Solar Radiation Parameters for Assessing Temperature Distributions on Bridge Cross-Sections

Solar radiation is one of the most important factors influencing the temperature distribution on bridge girder cross-sections. The bridge temperature distribution can be estimated using estimation models that incorporate solar radiation data; however, such data can be cost- or time-prohibitive to obtain. A review of the literature was carried out on estimation models for solar radiation parameters, including the global solar radiation, beam solar radiation and diffuse solar radiation. Solar radiation data from eight cities in Fujian Province in southeastern China were obtained on site. Solar radiation models applicable to Fujian, China were proposed and verified using the measured data. The linear Ångström-Page model (based on sunshine duration) can be used to estimate the daily global solar radiation. The Collares-Pereira and Rabl model and the Hottel model can be used to estimate the hourly global solar radiation and the beam solar radiation, respectively. Three bridges were chosen as case studies, for which the temperature distributions on girder cross-sections were monitored on site. Finite element models (FEM) of the cross-sections of the bridge girders were implemented using the Midas program. The temperature-time curves obtained from the FEM showed very close agreement with the measured values for summertime. Ignoring the solar radiation effect would result in lower and delayed temperature peaks. However, the influence of solar radiation on the temperature distribution in winter is negligible.

Introduction

Temperature distributions in bridge structures built in the outdoor environment are appreciably influenced by solar radiation and ambient temperature variations [1]. The thermal energy associated with solar radiation can result in an increase in temperature at the top surfaces of the structure beyond the ambient temperature. Solar radiation is one of the most important factors affecting the temperature distribution in bridge structures [2]. Heat conduction can further affect the temperature distribution. Although there are many experimental studies that address temperature distribution in bridges [3][4][5][6][7][8], few consider and measure solar radiation when assessing bridge temperature distributions. When carrying out a dynamic heat transfer analysis on bridge structures, the geometry of the cross-section, initial thermal state, varying boundary conditions, thermal properties of materials, etc., should be input into any Finite Element Method (FEM) software used. The solar radiation levels on different parts of bridge structures represent one of the time-varying boundary conditions, which can be estimated by considering the various model parameters. Taking box girder bridges as an example, the external surface of the top flange is influenced by both the beam and diffuse solar radiation, while the external surface of the web is influenced by a combination of the beam solar radiation, diffuse solar radiation and radiation reflected from the ground.

Estimation Model for Daily Global Solar Radiation

The global solar radiation is mainly related to meteorological factors such as the duration of sunshine, extent of cloud cover, ambient temperature and so on. Accordingly, empirical regression analyses or artificial intelligence techniques are used by researchers to establish estimation models of daily global solar radiation based on different meteorological factors.
• Sunshine duration fraction models

In 1924, a linear equation relating the clearness index and the sunshine duration fraction was proposed by Ångström [10], as shown below:

H/HC = a + b(S/S0)

where H/HC is the clearness index, H is the daily global solar radiation (averaged over one month), HC is the average clear-day global solar radiation, S/S0 is the sunshine duration fraction, S is the daily sunshine duration (averaged over one month), S0 is the maximum daily sunshine duration (averaged over one month), and a and b are empirical coefficients. However, it is difficult to properly define the concept of a "clear day" in the equation proposed by Ångström. Therefore, the daily extraterrestrial solar radiation on a horizontal surface (averaged over one month), H0, was proposed by Page to be used instead of HC, resulting in the Ångström-Page equation [11]:

H/H0 = a′ + b′(S/S0)

where a′ and b′ are empirical coefficients in the Ångström-Page equation. The Ångström-Page equation is one of the classical equations for estimating the daily global solar radiation. Different mathematical expressions have been proposed by other researchers to improve its accuracy, including quadratic [12], cubic [13], logarithmic [14], exponential [15] and power expressions [16]. However, the improvement in accuracy from these expressions is not obvious when compared with the linear expression [17]. Therefore, the linear Ångström-Page equation is generally used due to its sufficient accuracy and lower computational effort. In China, research on global solar radiation models began in the 1960s and was based on the Ångström-Page equation with empirical coefficients for different geographic zones [18]. However, China is vast in territory, and environmental conditions differ markedly between regions. Thus, it is difficult to establish a common set of empirical coefficients applicable to all regions. Some researchers have used other meteorological and geographic parameters, such as the ambient temperature, altitude, longitude and latitude, relative humidity, and cloud cover, to improve the accuracy of the Ångström-Page equation [19][20][21]. It is noted that sunshine duration has the greatest influence on the daily global solar radiation compared with other meteorological factors. Moreover, considering other meteorological factors may reduce the accuracy of the estimated values [22].
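To make the Ångström-Page relation concrete, the sketch below evaluates it for a single month. This is an illustrative Python sketch, not code from the paper: Cooper's declination formula is an assumed choice for computing H0, and the coefficients a′, b′ and the sunshine hours are placeholder values rather than the fitted Fujian coefficients.

```python
import numpy as np

G_SC = 1367.0  # solar constant, W/m^2

def declination(day_of_year):
    """Solar declination in radians (Cooper's formula, an assumed choice)."""
    return np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365)

def daily_extraterrestrial_H0(lat_deg, day_of_year):
    """Daily extraterrestrial radiation H0 on a horizontal surface, MJ/m^2."""
    phi, delta = np.radians(lat_deg), declination(day_of_year)
    omega_s = np.arccos(-np.tan(phi) * np.tan(delta))  # sunset hour angle, rad
    H0 = (24 * 3600 * G_SC / np.pi) \
        * (1 + 0.033 * np.cos(2 * np.pi * day_of_year / 365)) \
        * (np.cos(phi) * np.cos(delta) * np.sin(omega_s)
           + omega_s * np.sin(phi) * np.sin(delta))
    return H0 / 1e6  # J/m^2 -> MJ/m^2

def angstrom_page_H(a, b, S, lat_deg, day_of_year):
    """Daily global radiation H from H/H0 = a' + b'(S/S0)."""
    delta = declination(day_of_year)
    omega_s_deg = np.degrees(np.arccos(-np.tan(np.radians(lat_deg)) * np.tan(delta)))
    S0 = (2.0 / 15.0) * omega_s_deg  # maximum possible sunshine duration, h
    return daily_extraterrestrial_H0(lat_deg, day_of_year) * (a + b * S / S0)

# Placeholder coefficients and sunshine hours for a latitude near Fuzhou
print(angstrom_page_H(a=0.19, b=0.55, S=8.0, lat_deg=26.1, day_of_year=172))
```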
• Non-sunshine duration models

The ambient temperature was chosen by some researchers to establish an estimation model for global solar radiation. The difference between the daily maximum and minimum temperatures was used to predict the daily global solar radiation by Hargreaves and Samani (H-S model) in 1982 [23]. To improve the accuracy of the H-S model, meteorological factors such as the altitude [24] and precipitation [25][26][27] were considered. Based on the H-S model, an exponential equation considering the daily temperature range was proposed by Bristow and Campbell (B-C model) in 1984 [28]. Grillone et al. compared the results of these empirical models with measured data from the Mediterranean region and concluded that the H-S model had the highest accuracy [29]. However, Liu et al. suggested using the B-C model after comparing various empirical models with measured data from 15 meteorological stations in Northern China, due to its better accuracy and the ease of determining the empirical factors [30]. It should be noted that the estimation error for models based on ambient temperature is larger than for those based on sunshine duration [22].

The extent of cloud cover was chosen by some researchers to establish estimation models for global solar radiation due to the lack of measured sunshine duration in some regions, especially in the oceans, mountains and deserts, where few meteorological stations exist. However, cloud cover is usually recorded visually, which introduces considerable uncertainty and error. An increase in cloud cover reduces the beam solar radiation and increases the diffuse solar radiation. A linear equation relating the clearness index and the average total cloud cover was proposed by Kimball based on measured data from many meteorological stations in the U.S. [31]. The relationships between the clearness index and the sunshine duration fraction, and between the clearness index and the extent of cloud cover, were discussed by Bennett in 1965 [32]. It is reported that the relationship between the clearness index and the sunshine duration fraction is more evident than that with other parameters. Three types of estimation models were presented by Wang based on the sunshine duration fraction, the extent of cloud cover, and the combination of the two [33]. Results indicated that the model considering the sunshine duration fraction had the highest accuracy.

• Artificial intelligence approach

The artificial intelligence approach has been utilized by some researchers to predict the global solar radiation [22,[34][35][36][37]. However, more factors are involved in artificial intelligence techniques compared to empirical models. Consequently, the computational effort is significantly higher, making them less convenient for engineering applications. Overall, the empirical model for daily global solar radiation based on the sunshine duration fraction is preferred for engineering applications.

Estimation Model for Hourly Global Solar Radiation

Hourly global solar radiation data are needed for research on temperature distribution on bridge girder cross-sections. Research on hourly global solar radiation is less frequent than its daily counterpart. A linear relationship between the hourly and daily global solar radiation was proposed by Liu and Jordan in 1960 using the following equation [38]:

rT = I/H = (π/24)(cos ω − cos ωs)/(sin ωs − ωs cos ωs)

where rT is the proportionality coefficient, I is the hourly global solar radiation, H is the daily global solar radiation, ω is the solar hour angle, and ωs is the sunset hour angle (expressed in radians). However, some researchers have reported that a linear relationship only exists on a clear day [39]. The equation proposed by Liu and Jordan was modified by Collares-Pereira and Rabl as follows [40]:

rT = (π/24)(a + b cos ω)(cos ω − cos ωs)/(sin ωs − ωs cos ωs)

where a = 0.409 + 0.5016 sin(ωs − 60°) and b = 0.6609 − 0.4767 sin(ωs − 60°). The validity of this equation has been verified by measured data [41,42]. Although some modifications were proposed by other researchers [42][43][44], the Collares-Pereira and Rabl equation is the most frequently used model for estimating the hourly global solar radiation.
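As a minimal illustration of the Collares-Pereira and Rabl model, the sketch below distributes an assumed daily total H over the daylight hours; the daily total and the sunset hour angle are placeholder values, not measured Fujian data.

```python
import numpy as np

def collares_pereira_rabl_rt(omega_deg, omega_s_deg):
    """Ratio r_T = I/H of hourly to daily global radiation."""
    a = 0.409 + 0.5016 * np.sin(np.radians(omega_s_deg - 60.0))
    b = 0.6609 - 0.4767 * np.sin(np.radians(omega_s_deg - 60.0))
    w, ws = np.radians(omega_deg), np.radians(omega_s_deg)
    return (np.pi / 24.0) * (a + b * np.cos(w)) \
        * (np.cos(w) - np.cos(ws)) / (np.sin(ws) - ws * np.cos(ws))

H = 22.0  # assumed daily global radiation, MJ/m^2
for omega in range(-75, 90, 15):  # hour angle; 0 deg corresponds to solar noon
    I = H * collares_pereira_rabl_rt(omega, omega_s_deg=105.0)
    print(f"omega = {omega:4d} deg, I = {I:.2f} MJ/m^2")
```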
The Gaussian function was considered by some researchers to estimate the hourly global solar radiation, which assumes that the meteorological parameters are all stochastic and that the variation of hourly global solar radiation follows a normal distribution. However, this approach can only be used for a clear day [45,46].

Estimation Model for Beam and Diffuse Solar Radiation

The beam solar radiation is related to the solar altitude, atmospheric transparency, cloud cover, altitude and so on [39]. The diffuse solar radiation is related to the incident angle of solar radiation, atmospheric condition and so on [39]. Literature on the beam or diffuse solar radiation is less frequent compared to the global solar radiation. Most of the research is focused on estimation models for beam and diffuse solar radiation under clear conditions. Moreover, most such estimation models are based on the hourly time scale. The model suggested by the American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE model) is a widely used estimation model for beam and diffuse solar radiation on a clear day [47]. Some modifications have been proposed to improve its accuracy [48][49][50].

A relationship between the transmission ratio of beam solar radiation τb and the transmission ratio of diffuse solar radiation τd was proposed by Liu and Jordan in 1960 [38] under the assumption that the atmosphere is transparent:

τd = 0.2710 − 0.2939τb

where τd = Id/I0, τb = Ib/I0, Id is the hourly diffuse solar radiation, Ib is the hourly beam solar radiation, and I0 is the hourly extraterrestrial solar radiation on a horizontal surface.

The following equation was proposed by Hottel in 1976 for estimating the transmission ratio of beam solar radiation based on the solar altitude angle and geographic altitude [51]:

τb = a0 + a1 exp(−k/sin h)

where a0, a1 and k are empirical coefficients considering the altitude and meteorological conditions, and h is the solar altitude angle. Other meteorological factors, such as ambient temperature and relative humidity, as well as the artificial intelligence approach, have been considered by some researchers to establish numerical relationships for beam and diffuse solar radiation [35,52,53]. However, a combination of the Liu and Jordan equation and the Hottel equation has been used by researchers to estimate the hourly beam and diffuse solar radiation values.
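A rough sketch of this combination follows. The Hottel coefficients used here are the standard-atmosphere values for 23 km visibility with the climate correction factors omitted; they are assumptions standing in for the locally fitted a0, a1 and k, and the input values are placeholders.

```python
import numpy as np

def hottel_tau_beam(altitude_km, sun_alt_deg):
    """Clear-day beam transmittance tau_b = a0 + a1*exp(-k/sin h) (Hottel).
    Standard-atmosphere coefficients, valid for altitudes below ~2.5 km."""
    A = altitude_km
    a0 = 0.4237 - 0.00821 * (6.0 - A) ** 2
    a1 = 0.5055 + 0.00595 * (6.5 - A) ** 2
    k = 0.2711 + 0.01858 * (2.5 - A) ** 2
    return a0 + a1 * np.exp(-k / np.sin(np.radians(sun_alt_deg)))

def split_beam_diffuse(I0, altitude_km, sun_alt_deg):
    """Split hourly extraterrestrial radiation I0 into beam and diffuse parts
    with Hottel's tau_b and the Liu-Jordan relation tau_d = 0.2710 - 0.2939*tau_b."""
    tau_b = hottel_tau_beam(altitude_km, sun_alt_deg)
    tau_d = 0.2710 - 0.2939 * tau_b
    return I0 * tau_b, I0 * tau_d

I_b, I_d = split_beam_diffuse(I0=1200.0, altitude_km=0.3, sun_alt_deg=65.0)
print(f"beam: {I_b:.0f} W/m^2, diffuse: {I_d:.0f} W/m^2")
```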
Monitoring of Solar Radiation

The maximum solar radiation on a clear day is generally considered to provide the maximum sunshine effect on the temperature distribution in bridge structures. In this paper, the hourly global solar radiation and hourly diffuse solar radiation for eight cities in Fujian were measured using an automated meteorological station, as shown in Figure 1. The cities (from north to south) and the corresponding measurement dates are shown in Figure 2.

The measured hourly global and diffuse solar radiation curves for the eight cities on a clear day are illustrated in Figure 3a,b, respectively. The hourly beam solar radiation can be calculated by subtracting the measured hourly diffuse solar radiation from the corresponding hourly global solar radiation. The calculated hourly beam solar radiation curves are shown in Figure 3c. The variation trends for all hourly solar radiation curves are similar, with the solar radiation appearing at about 06:00, reaching the peak value at about 12:00, and disappearing at about 18:00.
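The subtraction is straightforward; as a minimal sketch with placeholder hourly values (W/m², 06:00 to 18:00), not the recorded Fujian measurements:

```python
import numpy as np

I_global = np.array([0, 130, 380, 620, 830, 970, 1020, 960, 810, 590, 330, 100, 0])
I_diffuse = np.array([0, 60, 120, 170, 200, 220, 230, 220, 195, 160, 110, 50, 0])

# Hourly beam radiation on a horizontal surface: global minus diffuse
I_beam = I_global - I_diffuse
```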
The global solar radiation is related to parameters such as latitude, altitude and atmospheric quality. The influence of latitude on the global solar radiation in Fujian is negligible, because all cities are located between 23° and 28° northern latitude. The maximum measured global solar radiation for Ninghua (about 1100 W/m²) was the highest among all cities, because this city has the highest altitude (as shown in Figure 3a). However, the difference between the maximum global solar radiation values for different cities was not large (about 100 W/m²). The maximum measured diffuse solar radiation and the maximum calculated beam solar radiation were in Fu'an (about 250 W/m²) and Ninghua (about 900 W/m²), respectively, as illustrated in Figure 3b,c. The difference between the maximum diffuse or beam solar radiation values for different cities was about 150 W/m².

Estimation Model for Daily Global Solar Radiation

As discussed earlier, the Ångström-Page equation [10,11] was chosen in this study to predict the daily global solar radiation for different cities in Fujian. The daily global solar radiation (averaged over one month) for the whole of Fujian cannot be determined from local records, because only two meteorological stations, in Fuzhou and Jian'ou, provide such data. Instead, the daily global solar radiation (averaged over one month) from 1983 to 2005 provided in a U.S. NASA database was used in this analysis, since the NASA database is used by many researchers when meteorological data are lacking [54][55][56]. The NASA database divides the map of China into grids based on longitude and latitude and provides data for each grid point. The validity of the data in the NASA database has been verified by comparison with solar radiation data from 88 meteorological stations in China measured before 2010 [57].

The daily astronomical solar radiation can be calculated using the earth-sun distance, solar elevation, duration of sunlight and so on [11]. The historical average sunshine duration (averaged over one month) since 1951 for four cities in Fujian (Nanping, Fuzhou, Yong'an and Xiamen) can be obtained from the National Meteorological Information Center, China Meteorological Administration [58]. The average maximum possible sunshine duration (averaged over one month) can be calculated using the following equation:

S0 = (2/15)ωs

where ωs is the sunset hour angle (in degrees). The empirical coefficients a′ and b′ for different cities in Fujian can be calculated by a linear regression analysis between the clearness index H/H0 and the sunshine duration fraction S/S0.
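A minimal sketch of this regression, assuming twelve monthly averaged values per city (the arrays below are placeholders, not the Fujian records):

```python
import numpy as np

def fit_angstrom_page(H, H0, S, S0):
    """Least-squares fit of H/H0 = a' + b'(S/S0) from monthly averaged series."""
    x = np.asarray(S) / np.asarray(S0)   # sunshine duration fraction S/S0
    y = np.asarray(H) / np.asarray(H0)   # clearness index H/H0
    b_prime, a_prime = np.polyfit(x, y, 1)
    return a_prime, b_prime

# Placeholder monthly series; units cancel in the ratios.
H  = [11.2, 12.8, 14.1, 16.0, 17.5, 18.9, 19.6, 18.8, 16.9, 14.6, 12.3, 10.9]
H0 = [22.0, 25.1, 29.3, 33.6, 36.5, 37.7, 37.1, 34.8, 31.0, 26.6, 22.9, 21.1]
S  = [3.9, 4.2, 4.6, 5.3, 5.8, 6.7, 7.9, 7.5, 6.4, 5.6, 4.8, 4.1]
S0 = [10.8, 11.3, 11.9, 12.6, 13.2, 13.5, 13.4, 12.9, 12.2, 11.5, 10.9, 10.6]
a_p, b_p = fit_angstrom_page(H, H0, S, S0)
print(f"a' = {a_p:.3f}, b' = {b_p:.3f}")
```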
Influence of Time Scale on Empirical Coefficients

The daily clearness index and the sunshine duration fraction (both averaged over one month) were chosen by Ångström and Page [10,11]. The daily solar radiation data measured by 36 meteorological stations in China were compared with the calculated daily solar radiation using different time scales, and the accuracy of the estimated solar radiation using the monthly time scale is higher than that using daily or yearly time scales [59]. Fuzhou was chosen as an example to analyze the influence of different time scales (daily, monthly and yearly) on the empirical coefficients in the Ångström-Page equation. The results are listed in Table 1 and illustrated in Figure 4, in which the hollow points denote the data and the solid line denotes the linear regression. The coefficient of correlation (γxy) for the linear regression analysis using the monthly time scale is the largest among the different time scales. Moreover, the root-mean-square error (RMSE) using the monthly time scale is the smallest. As a result, the monthly time scale was chosen to calculate the empirical coefficients in the Ångström-Page equation.
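The two comparison metrics can be computed as below; this is a generic sketch rather than the study's own script:

```python
import numpy as np

def regression_quality(y_meas, y_est):
    """Correlation coefficient (gamma_xy) and root-mean-square error between
    measured and estimated clearness indices, used to compare time scales."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_est = np.asarray(y_est, dtype=float)
    gamma_xy = np.corrcoef(y_meas, y_est)[0, 1]
    rmse = np.sqrt(np.mean((y_est - y_meas) ** 2))
    return gamma_xy, rmse
```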
Influence of Sunshine Duration on the Empirical Coefficients

Aside from the time scale, the actual sunshine duration also affects the empirical coefficients in the Ångström-Page equation. The measured average sunshine duration (averaged over one month) since 1951 for four cities in Fujian can be obtained from the National Meteorological Information Center, China Meteorological Administration [58]. Hua'an was chosen as an example to illustrate the influence of sunshine duration on the empirical coefficients. The results listed in Table 2 show that the coefficient of correlation is largest when using the sunshine duration for Xiamen, which is the city nearest to Hua'an. Moreover, the root-mean-square error obtained by using the sunshine duration of Xiamen is the smallest. Consequently, the empirical coefficients for cities in Fujian without measured sunshine duration data can be predicted using the available sunshine duration for the nearest city (Nanping, Fuzhou, Yong'an, or Xiamen). To verify the applicability and accuracy of the Ångström-Page equation in Fujian, the accuracy of the linear regression estimates for all eight cities is listed in Table 3. For all cities, the coefficients of correlation are larger than 0.84, and the root-mean-square errors are less than 0.05. Therefore, the accuracy of the Ångström-Page model is considered good, and this model can be used to predict the global solar radiation for different cities in Fujian.
Estimation Model for Hourly Global Solar Radiation

As discussed in Section 2.1.2, the Collares-Pereira and Rabl equation was chosen to estimate the hourly global solar radiation for different cities in Fujian based on the corresponding daily global solar radiation. The sunshine durations for the eight cities were measured on site. Fuzhou was chosen as an example to estimate the hourly global solar radiation. The measured and estimated hourly global solar radiation curves for the different cities are compared in Figure 5, with the hollow points denoting the measured values and the solid line showing the estimated results. The minimum coefficient of correlation among the eight cities is 0.989. Therefore, the hourly global solar radiation for different cities in Fujian can be accurately predicted by the Collares-Pereira and Rabl equation.

Estimation Model for Hourly Beam Solar Radiation

Global solar radiation should be separated into beam and diffuse solar radiation to analyze the temperature distribution on bridge girder cross-sections. As discussed earlier, the Hottel equation was chosen to predict the hourly beam solar radiation for different cities in Fujian. The measured and estimated hourly beam solar radiation curves for the eight cities in Fujian are compared in Figure 6, in which the hollow points denote the measured values and the solid line denotes the estimated results. The minimum coefficient of correlation among the eight cities is 0.974. Therefore, the hourly beam solar radiation for different cities in Fujian can be accurately predicted by the Hottel equation.
Estimation Model for Hourly Diffuse Solar Radiation

The hourly diffuse solar radiation can be calculated by subtracting the hourly beam solar radiation from the corresponding hourly global solar radiation. The measured and estimated hourly diffuse solar radiation curves for the eight cities in Fujian are compared in Figure 7, in which the hollow points denote the measured values and the solid line denotes the estimated results. The minimum coefficient of correlation among the eight cities is 0.948. Therefore, the estimated hourly diffuse solar radiation curves are close to the measured values.
Solar Radiation in Fujian

The maximum values of the global, beam and diffuse solar radiation (for the summer solstice, 21 June) for 56 cities in Fujian were calculated using the models described in Section 3, and are illustrated in Figure 8. The maximum global solar radiation for Xiapu (about 1210 W/m²) is the highest and that for Anxi (about 970 W/m²) is the lowest among all cities, as shown in Figure 8a. The biggest difference among the global solar radiation maxima for different cities is about 240 W/m². The maximum beam solar radiation for Zhouning (about 909 W/m²) is the highest, and that for Ningde (about 821 W/m²) is the lowest among all cities, as shown in Figure 8b. The biggest difference among the beam solar radiation maxima for different cities is about 88 W/m². The maximum diffuse solar radiation for Xiapu (about 388 W/m²) is the highest and that for Dehua (about 102 W/m²) is the lowest among all cities, as shown in Figure 8c. The biggest difference among the diffuse solar radiation maxima for different cities is about 286 W/m².

Influence of Solar Radiation on Temperature Distribution on Bridge Girder Cross-Sections

6.1. Box Girder Bridge

6.1.1. Finite Element Model

One 8-span continuous prestressed concrete box girder bridge in Fuzhou was chosen as a case study. The dimensions of the girder cross-section and the installation locations of the temperature sensors are illustrated in Figure 9. The thickness of the concrete overlay is 80 mm. Four temperature sensors were installed near the external surfaces of the top flange (A-T), east web (A-E), west web (A-W) and bottom flange (A-B) to measure the air temperature at different parts of the box girder. One temperature sensor was hung in the box girder (A-I) to measure the air temperature inside the box girder. Six temperature sensors were installed in the concrete within the east web, west web and bottom flange, of which three were near the external surface (EW-1, WW-1 and B-1) and three were near the internal surface of each component (EW-2, WW-2 and B-2). The meteorological data, including hourly global solar radiation, hourly diffuse solar radiation and wind speed, were measured by an automated meteorological station. The monitoring period was from 1 April 2010 to 31 March 2011 and the time increment between measurements was one hour.

The software Midas was chosen to establish the finite element model of the cross-section of the box girder (Figure 9) based on a heat-conduction model. A 2D plane strain element can be used for steady-state or transient analyses. The ambient air temperature data were obtained from the temperature sensors installed near the external surfaces of the box girder. The solar radiation was calculated using the estimation models described earlier. The effects of solar radiation and convective and radiative heat transfer were considered at the external surfaces of the box girder. The surface heat transfer coefficient and the temperature of the surrounding fluid medium were input as boundary conditions for the finite element model. The influence of the solar radiation on different parts of the box girder was considered by using the following rules and assumptions: (a) the external surface of the top flange is influenced by both the beam and diffuse solar radiation; (b) the external surfaces of the two webs are in the shadow of the top flange, and are therefore influenced by the diffuse solar radiation and ground reflection only; (c) the undersides of the top and bottom flanges are influenced by the ground reflection alone. The boundary condition at the internal surface of the box girder was set based on the measured temperature obtained from the temperature sensor hung in the box girder (A-I). For the thermal parameters of the concrete, a specific heat of 960 J/(kg·°C), heat conductivity of 1.5 W/(m·°C), and density of 2400 kg/m³ were used [60]. The finite element mesh size was set as 0.02 m. There were 21,000 nodes and 19,337 elements, as illustrated in Figure 10. Each node had a single temperature degree of freedom.
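As an illustration of the transient heat-transfer analysis (a 1D stand-in, not the 2D Midas model), the sketch below advances an explicit finite-difference slice through the top flange using the paper's concrete parameters and 0.02 m mesh size; the solar absorptivity, convection coefficient, flange thickness and air temperatures are assumed values, and the boundary nodes use a simplified full-cell energy balance.

```python
import numpy as np

k, rho, c = 1.5, 2400.0, 960.0         # concrete parameters from the paper
alpha_d = k / (rho * c)                # thermal diffusivity, m^2/s
dx, dt = 0.02, 60.0                    # mesh size from the paper; 60 s step
assert alpha_d * dt / dx**2 < 0.5      # explicit stability condition

alpha_s, h_c = 0.65, 15.0              # assumed absorptivity and W/(m^2*C)
n = 20                                 # nodes through an assumed ~0.4 m flange
T = np.full(n, 25.0)                   # initial temperature field, deg C

def step(T, I_solar, T_air):
    """One explicit time step; solar plus convective flux at the top node."""
    Tn = T.copy()
    # Interior nodes: dT/dt = alpha_d * d2T/dx2
    Tn[1:-1] += alpha_d * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Top surface: absorbed solar radiation plus convection to ambient air
    q_top = alpha_s * I_solar + h_c * (T_air - T[0])
    Tn[0] += dt / (rho * c * dx) * (q_top + k * (T[1] - T[0]) / dx)
    # Bottom surface: convection only (inside-air temperature assumed = T_air)
    q_bot = h_c * (T_air - T[-1])
    Tn[-1] += dt / (rho * c * dx) * (q_bot + k * (T[-2] - T[-1]) / dx)
    return Tn

for hour in range(24):                 # one day with a crude clear-day curve
    I = max(0.0, 1000.0 * np.sin(np.pi * (hour - 6) / 12))
    for _ in range(60):                # 60 steps of 60 s = 1 h
        T = step(T, I, T_air=30.0)
print(f"top {T[0]:.1f} C, mid {T[n // 2]:.1f} C, bottom {T[-1]:.1f} C")
```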
6.1.2. Influence of Solar Radiation on Temperature Distribution on Cross-Section of Box Girder

During the monitoring period, 2 August 2010 was a sunny day with high solar radiation. Comparisons of the temperature-time curves obtained experimentally or from the finite element model with or without the consideration of solar radiation are illustrated in Figure 11 (2 August 2010). The hollow points denote the measured data, the solid line denotes the calculated curves with the consideration of solar radiation, and the dashed line denotes the calculated curves without the consideration of solar radiation. The temperature contour plots obtained from the finite element model with or without the consideration of solar radiation are compared in Figure 12 (15:00, 2 August 2010).

In Figure 11, it can be observed that the variation trends and the times at which the highest temperatures occurred are similar for the measured and calculated results on 2 August 2010. The differences between the highest measured and calculated temperatures when considering the effect of solar radiation were very small (maximum 0.2 °C for the two webs and 1.1 °C for the bottom flange). However, the highest calculated temperatures without the consideration of solar radiation were lower (maximum 1.7 °C for the two webs and 1.3 °C for the bottom flange) and had a time delay (3 h for the two webs and the bottom flange). The accuracy of the solar radiation calculated by using the estimation models can meet the engineering requirements. Moreover, when solar radiation was not considered, the differences between the highest measured and calculated temperatures at the external surfaces of the box girder (1.7 °C for EW-1, 1.5 °C for WW-1 and 1.3 °C for B-1) were significantly larger than the differences at the internal surfaces of the box girder (0.7 °C for EW-2, 0.2 °C for WW-2 and 0.5 °C for B-2). This is because the influence of solar radiation on the temperature distribution decreases as the distance from the external surfaces increases. From Figure 12, it can be observed that the differences between the highest calculated temperatures with or without the consideration of solar radiation were significant at 15:00 on 2 August 2010 (maximum 12.5 °C). The vertical temperature variation on the cross-section of the box girder when considering the effect of solar radiation (19.2 °C) was significantly larger than that without the consideration of solar radiation (7.0 °C). Therefore, the influence of solar radiation should be considered in the analyses of the temperature distribution on box girders.
During the monitoring period, 16 December 2010 was chosen as a representative winter day. Comparisons of the temperature-time curves obtained experimentally or from the finite element model with or without the consideration of solar radiation are illustrated in Figure 13 (16 December 2010). The hollow points denote the measured data, the solid line denotes the calculated curves with the consideration of solar radiation, and the dashed line denotes the calculated curves without the consideration of solar radiation. The temperature contour plots obtained from the finite element model with or without the consideration of solar radiation are compared in Figure 14 (24:00, 16 December 2010).

In Figure 13, it can be found that the variation trends, the lowest temperatures and the times at which the lowest temperatures occurred were similar for the measured and calculated values without the consideration of solar radiation on 16 December 2010 (maximum difference 0.3 °C for the two webs and 0.2 °C for the bottom flange). In Figure 14, it can be observed that the differences between the highest calculated temperatures with or without the consideration of solar radiation were negligible at 24:00 on 16 December 2010 (maximum 0.1 °C). The vertical temperature variations on the cross-section of the box girder were similar for the calculated values with or without the consideration of solar radiation. Therefore, the influence of solar radiation on the temperature distribution on the cross-section of the box girder is negligible in winter.
6.2. Side-by-Side Box Girder Bridge

6.2.1. Finite Element Model

One 6-span concrete continuous bridge in Zhangzhou with a cross-section consisting of 11 side-by-side box girders was chosen as a case study. The thickness of the concrete overlay is 100 mm. The dimensions of the girder cross-sections and the installation locations of the temperature sensors are illustrated in Figure 15. Six temperature sensors were installed in the concrete within the top flanges (1-T and 11-T), webs (1-W and 11-W) and bottom flanges (1-B and 11-B) of box girders #1 and #11. The meteorological data, including ambient air temperature, hourly global solar radiation, hourly diffuse solar radiation and wind speed, were measured by an automated meteorological station. The monitoring period was from 30 July 2014 to 3 August 2014 and the time increment between measurements was set as one hour.

The 2D plane strain element in Midas-FEA was chosen to establish the finite element model of the cross-sections of the side-by-side box girders (Figure 15). The ambient air temperature data were obtained from the automated meteorological station. The solar radiation was calculated using the estimation models described earlier. The effects of solar radiation and convective and radiative heat transfer were considered at the external surfaces of the side-by-side box girders. The surface heat transfer coefficient and the temperature of the surrounding fluid medium were input as boundary conditions for the finite element model. The influence of the solar radiation on different parts of the side-by-side box girders was considered by using the following rules and assumptions: (a) the external surfaces of the top flanges are influenced by the beam solar radiation and diffuse solar radiation; (b) the webs in the shadow are influenced by the diffuse solar radiation and ground reflection; (c) the webs not in the shadow are influenced by the beam solar radiation, diffuse solar radiation and ground reflection; (d) the undersides of the top and bottom flanges are not influenced by the solar radiation or ground reflection, because field observation showed that the space below the side-by-side box girders is small. No temperature gauge was hung inside the side-by-side box girders. Therefore, the internal air volumes were simulated using 2D plane strain elements with the thermal parameters of air. It can be observed that the influence of temperature on the thermal parameters of the air is small; therefore, the thermal parameters of the air at 100 °C and at 0 °C were input into the finite element models in summer and winter, respectively. The thermal parameters of the concrete were discussed earlier. For the air at 0 °C, a specific heat of 714.8 J/(kg·°C), heat conductivity of 0.023 W/(m·°C) and density of 1.293 kg/m³ were used; for the air at 100 °C, a specific heat of 716.9 J/(kg·°C), heat conductivity of 0.030 W/(m·°C) and density of 0.946 kg/m³ were used [61]. The finite element mesh size was set as 0.02 m. There were 27,370 nodes and 27,190 elements, as illustrated in Figure 16. Each node had a single temperature degree of freedom.
6.2.2. Influence of Solar Radiation on Temperature Distribution on Cross-Section of Side-by-Side Box Girder

In Figure 17, it can be observed that the variation trends and the times at which the highest temperatures occurred are similar for the measured and calculated results on 31 July 2014. The differences between the highest measured and calculated temperatures when considering the effect of solar radiation were very small (maximum 0.5 °C for the top flanges and 1.8 °C for the webs). However, the highest calculated temperatures without the consideration of solar radiation were lower (maximum 9.3 °C for the top flanges and 1.8 °C for the webs) and had a time delay (2 h for the top flanges). For the bottom flanges, the differences between the highest measured and calculated temperatures with and without the consideration of solar radiation were very small (maximum 0.6 °C). The accuracy of the solar radiation calculated by using the estimation models can meet the engineering requirements. In Figure 18, it can be observed that the differences between the highest calculated temperatures with or without the consideration of solar radiation were significant at 16:00 on 31 July 2014 (maximum 10.3 °C). The vertical temperature variation on the cross-section of the side-by-side box girder when considering the effect of solar radiation (15.5 °C) was significantly larger than that without the consideration of solar radiation (5.3 °C) in summer. Therefore, the influence of solar radiation should be considered in the analyses of the temperature distribution on side-by-side box girders, especially for the top flanges.

6.3. T-Shaped Girder Bridge

6.3.1. Finite Element Model

The 2D plane strain element in Midas-FEA was chosen to establish the finite element model of the cross-sections of the T-shaped girders (Figure 19). The ambient air temperature data were obtained from the automated meteorological
station. The solar radiation was calculated using the estimation models described earlier.

6.3.2. Influence of Solar Radiation on Temperature Distribution on Cross-Section of T-Shaped Girder

In Figure 21, it can be observed that the variation trends and the times at which the highest temperatures occurred are similar for the measured and calculated results on 23 August 2012. The differences between the highest measured and calculated temperatures when considering the effect of solar radiation were very small (maximum 1.7 °C for the top flanges and 1.0 °C for the webs). However, the highest calculated temperatures without the consideration of solar radiation were lower (maximum 7.6 °C for the top flanges and 3.1 °C for the webs) and had a time delay (2 h for the top flanges). For the bottom flanges, the differences between the highest measured and calculated temperatures with and without the consideration of solar radiation were very small (maximum 0.3 °C). The accuracy of the solar radiation calculated by using the estimation models can meet the engineering requirements. In Figure 22, it can be observed that the differences between the highest calculated temperatures with or without the consideration of solar radiation were significant at 15:00 on 23 August 2012 (maximum 13.5 °C). The vertical temperature variation on the cross-section of the T-shaped girder when considering the effect of solar radiation (19.7 °C) was significantly larger than that without the consideration of solar radiation (6.5 °C) in summer. Therefore, the influence of solar radiation should be considered in the analyses of the temperature distribution on T-shaped girders, especially for the top flanges and webs.

Discussion

The following discussions can be drawn based on the results and within the limitations of the research presented in this paper:

(1) The variation trends for all hourly solar radiation curves are similar, with the solar radiation appearing at about 6:00 a.m., reaching the maxima at about 12:00 p.m.
and disappearing at about 6:00 p.m. The maximum measured global solar radiation was for Ninghua (about 1100 W/m²), and the difference between the maximum global solar radiation values for different cities was about 100 W/m². The maximum measured diffuse solar radiation was for Fu'an (about 250 W/m²) and the maximum calculated beam solar radiation was for Ninghua (about 900 W/m²). The differences between the maximum diffuse or beam solar radiation values for different cities were both about 150 W/m².

(2) The linear regression of the Ångström-Page equation using the monthly time scale can predict the global solar radiation for different cities in Fujian. For the cities in Fujian without actual data on sunshine duration, the empirical coefficients can be estimated using the available sunshine duration for the nearest city.

The highest calculated temperatures without the consideration of solar radiation were lower and had a time delay, especially for the top flanges in summertime. The vertical temperature variation when considering the effect of solar radiation was significantly larger than that without the consideration of solar radiation. The influence of solar radiation on the temperature distribution decreases as the distance from the external surfaces increases. The influence of solar radiation on the temperature distribution of the bridge girder cross-sections is negligible in winter.

(6) The solar radiation parameters for other cities and regions in China and elsewhere, as well as for different bridge superstructure types, can be similarly developed based on more case studies to establish the relevant estimation models. This can serve as a prelude for the future development of specifications related to temperature effects in bridge engineering.

Figure 2. Map of eight cities in Fujian.

Figure 4. Linear regression using different time scales: (a) daily time scale; (b) monthly time scale; and (c) yearly time scale.

Figure 8. Maximum solar radiation for 56 cities in Fujian: (a) maximum global solar radiation; (b) maximum beam solar radiation; and (c) maximum diffuse solar radiation.
Figure 10. Finite element model of cross-section of box girder.

6.1.2. Influence of Solar Radiation on Temperature Distribution on Cross-Section of Box Girder

During the monitoring period, 2 August 2010 was a sunny day with high solar radiation. Comparisons of the temperature-time curves obtained experimentally and from the finite element model, with and without the consideration of solar radiation, are illustrated in Figure 11 (2 August 2010). The hollow points denote the measured data, the solid lines denote the calculated curves with the consideration of solar radiation, and the dashed lines denote the calculated curves without the consideration of solar radiation. The temperature contour plots obtained from the finite element model with and without the consideration of solar radiation are compared in Figure 12 (15:00, 2 August 2010).

In Figure 11, it can be observed that the variation trends and the times at which the highest temperatures occurred are similar for the measured and calculated results on 2 August 2010. The differences between the highest measured and calculated temperatures when considering the effect of solar radiation were very small (maximum 0.2 °C for the two webs and 1.1 °C for the bottom flange). However, the highest calculated temperatures without the consideration of solar radiation were lower (by a maximum of 1.7 °C for the two webs and 1.3 °C for the bottom flange) and delayed (by 3 h for the two webs and the bottom flange). The accuracy of the solar radiation calculated using the estimation models can therefore meet the engineering requirements. Moreover, when solar radiation was not considered, the differences between the highest measured and calculated temperatures at the external surfaces of the box girder (1.7 °C for EW-1, 1.5 °C for WW-1 and 1.3 °C for B-1) were significantly larger than the differences at the internal surfaces (0.7 °C for EW-2, 0.2 °C for WW-2 and 0.5 °C for B-2). This is because the influence of solar radiation on the temperature distribution decreases as the distance from the external surfaces increases.

From Figure 12, it can be observed that the differences between the highest calculated temperatures with and without the consideration of solar radiation were significant at 15:00 on 2 August 2010 (maximum 12.5 °C). The vertical temperature variation on the cross-section of the box girder when considering the effect of solar radiation (19.2 °C) was significantly larger than that without the consideration of solar radiation (7.0 °C). Therefore, the influence of solar radiation should be considered in analyses of the temperature distribution on box girders.

Figure 14. Temperature contour plots of box girder at 24:00 on 16 December 2010: (a) with solar radiation; and (b) without solar radiation.
6.2. Side-by-Side Box Girder Bridge

6.2.1. Finite Element Model

One 6-span concrete continuous bridge in Zhangzhou, with a cross-section consisting of 11 side-by-side box girders, was chosen as a case study. The thickness of the concrete overlay is 100 mm. The dimensions of the girder cross-sections and the installation locations of the temperature sensors are illustrated in Figure 15. Six temperature sensors were installed in the concrete within the top flanges (1-T and 11-T), webs (1-W and 11-W) and bottom flanges (1-B and 11-B) of box girders #1 and #11. The meteorological data, including the ambient air temperature, hourly global solar radiation, hourly diffuse solar radiation and wind speed, were measured by an automated meteorological station. The monitoring period was from 30 July 2014 to 3 August 2014, and the time increment between measurements was set as one hour.

The effects of solar radiation and convective and radiative heat transfer were considered at the external surfaces of the side-by-side box girders. The surface heat transfer coefficient and the temperature of the surrounding fluid medium were input as boundary conditions for the finite element model. The influence of the solar radiation on different parts of the side-by-side box girders was considered using the following rules and assumptions: (a) the external surfaces of the top flanges are influenced by the beam solar radiation and diffuse solar radiation; (b) the webs in the shadow are influenced by the diffuse solar radiation and ground reflection; (c) the webs not in the shadow are influenced by the beam solar radiation, diffuse solar radiation and ground reflection; (d) the undersides of the top and bottom flanges are not influenced by the solar radiation or ground reflection, because field observation showed that the space below the side-by-side box girders is small.

No temperature gauge was hung inside the side-by-side box girders. Therefore, the air inside the girders was simulated using 2D plane strain elements with the thermal parameters of air (at 0 °C and 100 °C) in the finite element model. The thermal parameters of the concrete were discussed earlier. For the air at 0 °C, a specific heat of 714.8 J/(kg·°C), a heat conductivity of 0.023 W/(m·°C) and a density of 1.293 kg/m³ were used; for the air at 100 °C, a specific heat of 716.9 J/(kg·°C), a heat conductivity of 0.030 W/(m·°C) and a density of 0.946 kg/m³ were used [61], as interpolated in the sketch below.

Figure 15. Layout of cross-section of side-by-side box girder (cm).

Figure 16. Finite element model of cross-section of side-by-side box girder.
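Quoting air properties at two reference temperatures suggests interpolating for intermediate cavity temperatures. The following is a minimal sketch of linear interpolation between the two quoted states (whether Midas interpolates linearly is an assumption of this sketch):

```python
def air_properties(T):
    """Air thermal parameters at temperature T (deg C), linearly interpolated
    between the two reference states quoted in the text [61].

    Returns (specific heat J/(kg*degC), conductivity W/(m*degC), density kg/m^3).
    """
    p0 = (714.8, 0.023, 1.293)   # at   0 deg C
    p1 = (716.9, 0.030, 0.946)   # at 100 deg C
    w = T / 100.0                # interpolation weight between the two states
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

print(air_properties(30.0))      # e.g., cavity air on a summer day
```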
6.2.2. Influence of Solar Radiation on Temperature Distribution on Cross-Section of Side-by-Side Box Girder

During the monitoring period, 31 July 2014 was a sunny day with high solar radiation. Comparisons of the temperature-time curves obtained experimentally and from the finite element model, with and without the consideration of solar radiation, are illustrated in Figure 17 (31 July 2014). The hollow points denote the measured data, the solid lines denote the calculated curves with the consideration of solar radiation, and the dashed lines denote the calculated curves without the consideration of solar radiation. The temperature contour plots obtained from the finite element model with and without the consideration of solar radiation are compared in Figure 18 (16:00, 31 July 2014).

In Figure 17, it can be observed that the variation trends and the times at which the highest temperatures occurred are similar for the measured and calculated results on 31 July 2014. The differences between the highest measured and calculated temperatures when considering the effect of solar radiation were very small (maximum 0.5 °C for the top flanges and 1.8 °C for the webs). However, the highest calculated temperatures without the consideration of solar radiation were lower (by a maximum of 9.3 °C for the top flanges and 1.8 °C for the webs) and delayed (by 2 h for the top flanges). For the bottom flanges, the differences between the highest measured and calculated temperatures with and without the consideration of solar radiation were very small (maximum 0.6 °C). The accuracy of the solar radiation calculated using the estimation models can therefore meet the engineering requirements.

In Figure 18, it can be observed that the differences between the highest calculated temperatures with and without the consideration of solar radiation were significant at 16:00 on 31 July 2014 (maximum 10.3 °C). The vertical temperature variation on the cross-section of the side-by-side box girder when considering the effect of solar radiation (15.5 °C) was significantly larger than that without the consideration of solar radiation (5.3 °C) in summer. Therefore, the influence of solar radiation should be considered in analyses of the temperature distribution on side-by-side box girders, especially for the top flanges.

6.3. T-Shaped Girder Bridge

6.3.1. Finite Element Model

One 4-span concrete continuous bridge in Yongchun, with a cross-section consisting of 4 T-shaped girders, was chosen as a case study. The thickness of the concrete overlay is 190 mm. The dimensions of the girder cross-section and the installation locations of the temperature sensors are illustrated in Figure 19. Four temperature sensors were installed in the concrete within the top flange (T-1), webs (1-W and 2-W) and bottom flange (4-B) of the T-shaped girders. The meteorological data, including the ambient air temperature, hourly global solar radiation, hourly diffuse solar radiation and wind speed, were measured by an automated meteorological station. The monitoring period was from 15 August 2012 to 25 August 2012, and the time increment between measurements was set as one hour.
The effects of solar radiation and convective and radiative heat transfer were considered at the external surfaces of the T-shaped girders. The surface heat transfer coefficient and the temperature of the surrounding fluid medium were input as boundary conditions for the finite element model. The influence of the solar radiation on different parts of the T-shaped girders was considered by dividing the cross-section into four zones, as shown in Figure 20: (a) the external surfaces of the top flanges are considered as Zone 1, which is influenced by the beam solar radiation and diffuse solar radiation; (b) the external surfaces of the webs of girders #1 and #4 are considered as Zone 2, of which the parts in shadow are influenced by the diffuse solar radiation and ground reflection, and the parts not in shadow are influenced by the beam solar radiation, diffuse solar radiation and ground reflection; (c) the undersides of the top and bottom flanges are set as Zone 3, which are treated as inner surfaces of the T-shaped girders and are not influenced by the solar radiation or ground reflection. Moreover, the convective heat transfer coefficient of the inner surfaces of the T-shaped girders was set as 5 W/(m²·°C), which is larger than that in the box girders (3.5 W/(m²·°C)) [62]. The finite element mesh size was set as 0.02 m; there were 16,165 nodes and 15,149 elements, as illustrated in Figure 20. Each node had a single temperature degree of freedom.

Figure 20. Finite element model of cross-section of T-shaped girder.

(3) The Collares-Pereira and Rabl equation can estimate the hourly global solar radiation for different cities in Fujian based on the daily global solar radiation. The hourly beam solar radiation for different cities in Fujian can be predicted well using the Hottel equation (see the sketch after this list). The hourly diffuse solar radiation for different cities in Fujian can be calculated by subtracting the hourly beam solar radiation from the corresponding hourly global solar radiation.

(4) The maximum global solar radiation, beam solar radiation and diffuse solar radiation (for 21 June, the summer solstice) for 56 cities in Fujian were calculated. The maximum global solar radiation is highest for Xiapu (about 1210 W/m²) and lowest for Anxi (about 970 W/m²); the biggest difference among the global solar radiation maxima in different cities is about 240 W/m². The maximum beam solar radiation is highest for Zhouning (about 909 W/m²) and lowest for Ningde (about 821 W/m²); the biggest difference among the beam solar radiation maxima is about 88 W/m². The maximum diffuse solar radiation is highest for Xiapu (about 388 W/m²) and lowest for Dehua (about 102 W/m²); the biggest difference among the diffuse solar radiation maxima is about 286 W/m².

(5) Comparisons of the measured and calculated temperature-time responses for the concrete box girder, side-by-side box girder and T-shaped girder, with and without the consideration of solar radiation, indicate that the influence of solar radiation should be considered in analyses of the temperature distribution on bridge girder cross-sections in summer. The accuracy of the solar radiation calculated using the estimation models can meet the engineering requirements.
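For reference, the following is a minimal sketch of the Hottel clear-sky beam-radiation model in its commonly published form (Python). The coefficients are those usually quoted for the 1962 standard atmosphere with 23 km visibility; the climate-type correction factors are set to 1 here as a simplifying assumption, and the example inputs are illustrative, not values from the paper:

```python
import math

def hottel_beam(G_on, zenith_deg, altitude_km):
    """Clear-sky beam solar radiation on a horizontal surface (W/m^2).

    G_on        : extraterrestrial radiation on a plane normal to the beam (W/m^2)
    zenith_deg  : solar zenith angle (degrees)
    altitude_km : station altitude (km); the usual form holds for A < 2.5 km
    """
    A = altitude_km
    a0 = 0.4237 - 0.00821 * (6.0 - A) ** 2
    a1 = 0.5055 + 0.00595 * (6.5 - A) ** 2
    k  = 0.2711 + 0.01858 * (2.5 - A) ** 2
    cos_z = math.cos(math.radians(zenith_deg))
    if cos_z <= 0.0:
        return 0.0                           # sun below the horizon
    tau_b = a0 + a1 * math.exp(-k / cos_z)   # atmospheric beam transmittance
    return G_on * tau_b * cos_z

# Illustrative call: near-noon sun at a low-altitude coastal station.
print(hottel_beam(G_on=1320.0, zenith_deg=15.0, altitude_km=0.1))
```

The hourly diffuse component then follows by subtraction from the estimated hourly global radiation, as stated in conclusion (3).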
Table 1. Linear regression analysis of clearness index and sunshine duration fraction using different time scales.

Table 2. Linear regression analysis of clearness index and sunshine duration fraction using sunshine duration for four cities.

Table 3. Linear regression analysis of clearness index and sunshine duration fraction for eight cities.
15,482.6
2018-04-17T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Results of Gravity Observations Using a Superconducting Gravimeter at the Tibetan Plateau

The tidal and nontidal gravity change characteristics in the Tibetan Plateau region were investigated using the continuous gravity measurements recorded with a superconducting gravimeter (SG) installed in Lhasa from December 8, 2009 to September 30, 2011. The results indicated that the precision of the tidal gravity observations with the SG in Lhasa was very high. The standard deviation of the harmonic analysis for the gravity tides was 0.498 nm s^(-2), and the uncertainties of the amplitude factors of the four main tidal waves (i.e., O1, K1, M2 and S2) were better than 0.002%. In addition, the diurnal gravity tide observations clearly revealed a pattern of nearly diurnal resonance. As a result, it is affirmed that the station can act as a local tidal gravity reference in the Tibetan Plateau and its adjacent regions. The load effects of oceanic tides are so weak that the resulting perturbation in the gravimetric factors is less than 0.6%. However, the load effects of the local atmosphere on both the tidal and the nontidal gravity observations are significant, although no seasonal variations were found. After removing the atmospheric effects, the standard deviation of the harmonic analysis for the gravity tides decreased markedly from 4.160 to 0.498 nm s^(-2). Having removed the load effects of oceanic tides and the local atmosphere, it is found that the tidal gravity observations are significantly different from those expected theoretically, which may be related to the active tectonic movement and extremely thick crust in the Tibetan Plateau region. In addition, the Earth's free oscillations excited by the 2011 Tohoku-Oki Mw 9.0 Earthquake were successfully detected.

INTRODUCTION

Ground-based continuous gravity measurements are a combined reflection of the transportation and exchange of material and the deformation of the Earth, which are related to all kinds of environmental perturbations and geodynamic processes. The superconducting gravimeter (SG) has the advantages of high stability and sensitivity, and extremely low noise and drift. Its precision is as high as 0.5 nm s^(-2) (1 nm s^(-2) is about one part in 10^10 of the mean gravity acceleration on the Earth's surface). It is now the best instrument for surveying temporal gravity variations in the world. It has the potential to detect almost all signatures with periods ranging from several seconds, related to coseismic movements, to several years, related to variations in the Earth's rotation, and even phenomena associated with the secular tectonic movements of the local crust. These include the Earth's free oscillations (Banka and Crossley 1999; Van Camp 1999; Lei et al. 2005; Park et al. 2005), the Earth's tides (Sun et al. 2001; Xu et al. 2004a), the load effects of barometric pressure (Sun and Lou 1998), the nearly diurnal resonance (Defraigne et al. 1994; Xu et al. 2002), translational oscillations of the solid inner core (Smylie 1992; Courtier et al. 2000; Rosat et al. 2003; Xu et al. 2010), polar motion (Loyer et al. 1999; Xu et al. 2004b), secular crust deformation due to earthquakes or other causes (Imanishi et al. 2004; Richter et al. 2004; Xu et al. 2008) and so on.
As a result, a significant scientific project, the Global Geodynamics Project, has been carried out since 1997 in order to investigate global and local dynamic problems using continuous gravity data from a worldwide network of superconducting gravimeters (Crossley 2004; Crossley and Hinderer 2009). This project has produced numerous scientific achievements and benefited many disciplines.

The Tibetan Plateau is located on the collision region of the Indian and Eurasian Plates. It is the youngest orogen, and is the largest and highest plateau in the world; it is referred to as the Earth's third pole. Since the early 20th century, many geodetic strategies, including trigonometrical surveys, arc measurements, leveling surveys, GPS measurements and gravity measurements, have been carried out by numerous international research institutes in order to gain knowledge of the present-day crustal deformation and movement status. The geodetic measurements have resulted in abundant information and basic data for studies on the mechanisms of the tectonic deformation in the plateau and have led to some significant results (Ma et al. 2001; Wang et al. 2001, 2004; Xu 2001; Zhang et al. 2002; Sun et al. 2009).

Lhasa is located on the southern part of the Tibetan Plateau, on the northern side of the Himalayan Mountains, on flat land in a valley in the middle reaches of the Lhasa River, a tributary of the Brahmaputra. A permanent station for continuous gravity measurements was set up in Lhasa by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences at the end of 2009 in order to investigate related geodynamical hotspots such as the Tibetan Plateau's formation, evolution, uplift rate and the related dynamic mechanisms. An SG, coded C057, was installed at the Lhasa station to monitor the continuous local gravity variations. Figure 1 shows the location of the Lhasa station and the tectonic environment of the surrounding region. The characteristics of the tidal gravity changes in Lhasa were investigated using continuous gravity measurements over more than one year recorded with the SG (Xu et al. 2012). The main motivation of this study is to investigate the tidal and non-tidal gravity changes, including accurate determinations of the gravimetric parameters, the load effects of oceanic tides and the local atmosphere in the Tibetan Plateau, the Earth's free oscillations, and the nearly diurnal resonance in the diurnal tidal gravity observations.

INSTALLATION AND PREPARATION

The geographical coordinates of the Lhasa station are 29.645° latitude and 91.035° longitude, and its altitude is 3632.3 m. The station includes the SG measurement room, the SG monitoring room and a room for contrast gravity measurements, as shown in Fig. 2. Two square observation piers with side lengths of 1.2 m were built in the inter-comparison measurement room for convenient contrast observations of a relative or absolute gravimeter with the SG. The SG was installed on an equilateral-triangle pier with side lengths of 77 cm. All of the observation piers were concreted to a depth of 1 m, fixed to a 30 cm thick bedding base and separated from the surroundings by a 10 cm wide slot to avoid perturbation from environmental noise, as shown in Fig. 3. A computer in the monitoring room connects to the SG through a cable and controls the SG and its related accessories by setting the associated parameters via the control system.
The working status of the instrument can be monitored through a real-time display of pictures and related parameters. The data, including the gravity, barometric pressure and temperature data, are acquired automatically and stored by the computer. The control system also allows remote monitoring, control and data transfer to Wuhan via the Internet.

Like a spring gravimeter, the SG is a relative gravimeter, and must be calibrated accurately. In order to determine the SG scale factor accurately, as well as to normalize the tidal gravity observations to the international tidal gravity reference at Wuhan (Xu et al. 2000), a high-precision spring gravimeter, LaCoste-Romberg (LCR) ET20, was installed simultaneously in the contrast gravity measurement room to carry out parallel observations with the SG, because it had worked perfectly at the Wuhan station for a long time. After primary preprocessing and harmonic analysis of the data recorded simultaneously with SG-C057 and LCR-ET20, the gravimetric amplitude factors of the main tidal waves were estimated. Using the observed amplitude factors of the tidal waves O1, K1 and M2 from the two gravimeters, the scale factor of SG-C057 was accurately determined as -777.358 ± 0.409 nm s^(-2) V^(-1), which is about 2.2% less than the value provided by the manufacturer (i.e., -795 nm s^(-2) V^(-1)). The relative precision of the calibration was as high as 0.05%, which completely satisfies the requirements for high-precision continuous gravity measurements (see Chen et al. 2012 and Xu et al. 2012 for details). A calibration sketch is given below.

The software package T-Soft (Vauterin 1998), recommended by the International Center of the Earth's Tides for the analysis of Earth tide data, was employed to preprocess the gravity data. Using an interactive remove-restore technique, disturbances such as spikes, steps, offsets, vibrations due to strong earthquakes and so on were graphically removed and the data corrected. Some short gaps due to happenstances such as sudden interruption of the electricity supply and instrument failure were interpolated with polynomials or spline functions. The 1-s sampled data series was transformed into a 1-h sampled one using a low-pass digital filter and is presented in Fig. 4a.

TIDAL GRAVITY OBSERVATIONS

The SG is regarded as the best technique for investigating the nature of local tidal and nontidal gravity changes. Eterna 3.30, a standard harmonic analysis software package (Wenzel 1996), was used. The gravimetric parameters (i.e., amplitude factor δ and phase difference Δφ) were accurately determined and are tabulated in Table 1. In the harmonic analysis, a high-precision tide-generating potential, HW95, developed by Hartmann and Wenzel (Hartmann and Wenzel 1995), was used. The analysis results indicated that the tidal gravity observation precision is very high. The standard deviation of the harmonic analysis was as little as 0.498 nm s^(-2), and the gravimetric parameters (δ, Δφ) of the four main tidal waves were accurately estimated as (1.16760 ± 0.00006, -0.0173° ± 0.0028°) for O1, (1.14166 ± 0.00005, 0.0636° ± 0.0022°) for K1, (1.16940 ± 0.00003, -0.4630° ± 0.0012°) for M2 and (1.16374 ± 0.00006, -0.6606° ± 0.0044°) for S2, where the estimation precision of the amplitude factors was better than 0.006%. For all the other tidal waves with amplitudes exceeding 20 nm s^(-2), it was better than 0.05%. This implies that the tidal gravity observations from the SG were so accurate that they can be regarded as a regional tidal gravity reference in the Tibetan Plateau and its surrounding regions.
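The calibration logic (matching the SG's raw harmonic amplitudes, in volts, to the tidal amplitudes implied by the calibrated LCR-ET20) can be sketched as a one-parameter least-squares problem. The amplitudes below are placeholders, not the published values:

```python
import numpy as np

# Hypothetical parallel-observation amplitudes for the calibration waves:
# raw SG harmonic amplitude (V) and LCR-implied tidal amplitude (nm s^-2).
sg_volts = np.array([0.386, 0.540, 0.412])    # O1, K1, M2 (illustrative)
lcr_amps = np.array([300.0, 420.0, 320.0])    # O1, K1, M2 (illustrative)

# Scale factor c minimizing sum_i (lcr_i - c * sg_i)^2:
c = np.dot(sg_volts, lcr_amps) / np.dot(sg_volts, sg_volts)
print(f"scale factor ~ {c:.3f} nm s^-2 per volt")
```

In practice the sign of the factor reflects the SG's output convention (the published value is negative), and its uncertainty follows from propagating the harmonic-analysis errors of both instruments.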
The dominant component in the observed residuals of the gravity tides arises mainly from the global and local oceanic tide load (OTL) effects in most areas of the world. Based on the classical theory of the Earth's surface loads (Farrell 1972) and available oceanic tide models, the load vectors L(L, λ) of all of these tidal waves were calculated using integrated Green's functions (Agnew 1997). The load vector can be written as

L = ∬_ocean G(ψ) Z(θ, λ) dS

where (θ0, λ0) and (θ, λ) are the colatitude and longitude of the station and of the integration element, respectively, Z(H, φ) is the vector of the oceanic tide (including the height H and phase φ) provided by the oceanic tide models, ψ is the angular distance from the station to the integration element, and G(ψ) is the Green's function for gravity changes excited by surface loads, which can be expressed as a linear combination of the load Love numbers.

In this study the recent global model of oceanic tides, FES04, deduced from the Topex/Poseidon altimeter data (Lefevre et al. 2002), was used. The model contains the cotidal maps of four diurnal and four semidiurnal tidal waves (i.e., Q1, O1, P1, K1, N2, M2, S2 and K2) with a spatial resolution of 0.125° × 0.125°. The loading vectors were obtained and are presented in Table 2. The numerical results indicate that the tidal gravity observations were only slightly influenced by the OTL, because Lhasa is located on the inland plateau far away from the oceans. Among all tidal waves, M2 has the largest OTL vector amplitude, 4.205 nm s^(-2), which is only about 0.6% of its observed amplitude in the gravity tides.

In order to clarify the OTL effects on the tidal gravity observations, an accurate theoretical model of the Earth's tides, in which the inelasticity of the mantle media, mantle convection and the excited deformation of the mantle boundaries were taken into account (Dehant et al. 1999), was used as a reference and denoted as DDW99. The observed residual vector B(B, β) was obtained by subtracting the theoretical vector from the observed vector for each tidal wave, and the difference between the observed residual vector B and the OTL vector L is defined as the final residual vector X(X, χ). The relationship of these vectors is illustrated in Fig. 5, and a numerical sketch is given below. All of the related results of the OTL effects on the tidal gravity observations made with the SG are tabulated in Table 2.

After removing the OTL effects, the amplitudes of the final residual vectors X of all tidal waves decreased slightly, except for P1 and K1. The OTL induced only a very small perturbation in the gravimetric parameters: less than 0.6% for the tidal waves with amplitudes exceeding 20 nm s^(-2). Having removed the oceanic tide load effects, the gravity tide amplitude factors changed only slightly and became slightly closer to the corresponding values in the theoretical and experimental models for the gravity tides (Dehant et al. 1999; Xu et al. 2004a). However, the observed amplitude factors for the diurnal tidal waves were still about 0.7% larger than those from DDW99, while the observed amplitude factors for the semidiurnal tidal waves were about 0.6% larger than those from DDW99. Compared with the tidal gravity observations from SGs at inland stations in other regions (Xu et al. 2004a), the differences between the observed and theoretical amplitude factors are too large to be explained by the oceanic tide load effects.
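The vector bookkeeping of Fig. 5 is just complex arithmetic on (amplitude, phase) pairs. The following is a minimal sketch for M2; only the 4.205 nm s^(-2) loading amplitude comes from the text, and all other numbers are illustrative placeholders:

```python
import cmath, math

def phasor(amplitude, phase_deg):
    """(amplitude, local phase in degrees) -> complex tidal vector."""
    return amplitude * cmath.exp(1j * math.radians(phase_deg))

A_obs    = phasor(700.0, -0.463)   # observed M2 vector (amplitude assumed)
A_theory = phasor(697.0,  0.0)     # DDW99 body-tide prediction (assumed)
L_otl    = phasor(4.205, 120.0)    # OTL vector; phase assumed

B = A_obs - A_theory               # observed residual vector B(B, beta)
X = B - L_otl                      # final residual vector X(X, chi)
for name, v in [("B", B), ("X", X)]:
    print(name, abs(v), math.degrees(cmath.phase(v)))
```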
The other large-scale anomalies in structure and perturbations may be responsible for the differences. In the Tibetan Plateau area, the largest local perturbation is due to variations in the glaciers covering the plateau. However, glacier ablation and its associated rebound are slow processes and hardly affect the tidal deformation of the Earth. As a result, it is acceptable that the relatively large differences between the gravimetric amplitude factors observed with SG-C057 at the Lhasa station and those expected theoretically might be due to the combined contribution of the active tectonic movement of the Tibetan Plateau and its surrounding areas and the extremely thick regional crust. Of course, a more certain conclusion cannot be drawn until SG data are accumulated over a longer period and further associated theoretical studies are carried out.

NON-TIDAL GRAVITY CHANGES AND THE RELATED DYNAMICAL IMPLICATION

The nontidal gravity changes, i.e., the gravity residuals, were deduced by subtracting the tidal gravity signatures from the original gravity variations and are depicted in Fig. 4b. It is found that there were abundant signatures of high and moderate frequencies in the non-tidal gravity variations, in addition to the secular tendency and long-period changes, which should be related to the instrument drift, the local vertical crust movements, variations in the Earth's rotation and the regional hydrological variations.

The barometric pressure load effects are the largest noise source in the tidal and nontidal gravity observations apart from the OTL. The measurement results, presented in Fig. 4c, indicated that there were neither obvious seasonal nor annual variations in the local atmospheric pressure, unlike the cases in other areas with low or moderate latitudes. Instead, the pressure changes mainly consisted of short-period perturbations, with magnitudes as large as about 30 hPa. In the period from June to September the predominant components of the local barometric pressure variations were high-frequency vibrations with relatively smaller magnitudes, while in the remaining periods the frequencies became lower and the magnitudes larger. The atmospheric gravity admittance was estimated as -3.631 ± 0.007 nm s^(-2) hPa^(-1) at the Lhasa station, which is not significantly different from the values either predicted by theoretical simulation or obtained with SGs at stations in other areas (Sun and Lou 1998; Sun et al. 2001; Xu et al. 2004a, 2008, 2012); a sketch of the admittance estimate is given below. In order to show the local barometric pressure load effects on the gravity measurements intuitively, Fig. 4 presents the nontidal gravity variations measured with the SG (Fig. 4b) and those after removing the atmospheric gravity signatures (Fig. 4d) at the Lhasa station. It was found that almost all obvious gravity disturbances were related to the local atmospheric pressure load effects. Comparison of the power spectral densities of the gravity residuals before and after removing the atmospheric effects showed that the pressure considerably influenced the gravity measurements in every frequency band. Nearly all vibrations with periods from 3 h to several days disappeared, and the gravity residuals became much smoother and quieter after the removal.
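A scalar admittance of this kind can be estimated by least squares from co-located hourly gravity residuals and pressure records. The following is a minimal sketch with synthetic data (a single frequency-independent admittance is a simplifying assumption):

```python
import numpy as np

def barometric_admittance(g_resid, pressure):
    """Least-squares scalar admittance (nm s^-2 per hPa) and the
    pressure-corrected gravity residuals."""
    p = pressure - pressure.mean()
    alpha = np.dot(p, g_resid) / np.dot(p, p)
    return alpha, g_resid - alpha * p

# Synthetic hourly series: inject a -3.6 nm s^-2/hPa pressure signal.
rng = np.random.default_rng(0)
p = 10.0 * rng.standard_normal(4000)              # pressure anomalies (hPa)
g = -3.6 * (p - p.mean()) + 0.5 * rng.standard_normal(4000)
alpha, g_clean = barometric_admittance(g, p)
print(f"estimated admittance: {alpha:.3f} nm s^-2 per hPa")
```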
The harmonic analysis results indicate that, once the atmospheric effects had been removed, the standard deviation decreased significantly from 4.160 to 0.498 nm s^(-2), and the noise levels decreased from 0.2093, 0.0668, 0.0375, and 0.0264 nm s^(-2) to 0.0194, 0.0177, 0.0095, and 0.0062 nm s^(-2) in the diurnal, semidiurnal, terdiurnal and quarter-diurnal frequency bands, respectively. The white-noise level also decreased, from 0.0717 to 0.0070 nm s^(-2).

Previous studies showed that, due to the interaction between the elliptical fluid core and the solid mantle, the core moves with a nearly diurnal free wobble (NDFW) relative to the whole Earth and behaves as a free core nutation (FCN) in inertial space, which leads to a frequency-dependent resonant enhancement in the diurnal gravity tides and in the observed gravimetric amplitude factors of the diurnal tidal waves. The nearly diurnal resonance is an important characteristic of tidal gravity observations (Defraigne et al. 1994; Dehant et al. 1999; Xu et al. 2002, 2004a). Using the tidal gravity observations with SG-C057 at the Lhasa station, the nearly diurnal resonance of the gravimetric parameters was fitted and is shown in Fig. 6 (a sketch of such a fit is given below), and the FCN period was retrieved as 450.5 ± 8.6 sidereal days, which is slightly longer than the periods retrieved by stacking the tidal gravity observations from global SGs (Defraigne et al. 1994; Xu et al. 2002, 2004a). Meanwhile, the retrieved quality factor was negative. The main reason is a tiny disturbance in the gravimetric parameters related to the active tectonic movement of the Tibetan Plateau and its adjacent areas and the extremely thick regional crust, in addition to the uncertainty of the oceanic tide models in the Indian Ocean.

The Earth's free oscillations (EFO) can be excited by a great earthquake. The EFO are classified as spheroidal and toroidal oscillations; the former are associated with cubical dilation and gravity changes, which can be detected by high-precision gravimeters installed on the ground. It has been proven that the SG plays an important role in constructing the long-period seismogram (Banka and Crossley 1999; Van Camp 1999; Lei et al. 2005; Park et al. 2005). On March 11, 2011, the magnitude 9.0 Tohoku-Oki Earthquake released so much energy that a Pacific-wide tsunami was generated and the EFO were excited. The Lhasa station is located on the Tibetan Plateau, where many small earthquakes near the station were induced by the event due to active tectonic movements and the complicated tectonic environment. Nevertheless, the SG successfully caught the signatures related to the EFO excited by this event, although the mean level of the background noise spectrum was a little higher than 1 (nm s^(-2))^2 Hz^(-1) at frequencies below 0.5 mHz. The results indicate that the signal-to-noise ratio (SNR) was still larger than 5.0 for modes 0S2 and 0S3, and higher than 10 for the other EFO modes except 1S2, whose SNR was still beyond 3.0. The spectral peaks near 0.945 mHz were generated by the 1S3, 2S2 and 3S1 modes, but they cannot be accurately distinguished at present. This is the first time that the EFO modes were observed at a plateau station. The spectral peaks of the EFO modes with frequencies less than 1.5 mHz are presented in Fig. 7.

Fig. 6. Power spectral density of gravity residuals from the SG at the Lhasa station before (black) and after (red) removal of barometric pressure effects.
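The resonance retrieval can be sketched as fitting a damped-resonance form to the diurnal amplitude factors. The wave frequencies below are standard approximate values in cycles per sidereal day, and the amplitude factors are illustrative numbers generated to be consistent with the model, not the Lhasa estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def resonance(f, d0, a, f_fcn):
    # delta(f) = d0 + a / (f - f_fcn), frequencies in cycles per sidereal day
    return d0 + a / (f - f_fcn)

f_obs = np.array([0.8908, 0.9270, 0.9946, 1.0000])   # Q1, O1, P1, K1 (approx.)
d_obs = np.array([1.1690, 1.1687, 1.1616, 1.1425])   # illustrative factors

popt, _ = curve_fit(resonance, f_obs, d_obs, p0=[1.17, 1e-4, 1.005])
d0, a, f_fcn = popt
T_fcn = 1.0 / (f_fcn - 1.0)          # FCN period in sidereal days
print(f"FCN period ~ {T_fcn:.1f} sidereal days")
```

A real analysis fits complex (amplitude and phase) observations, weights each wave by its estimated uncertainty, and retrieves the quality factor from the imaginary part of the resonance frequency.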
CONCLUSION

Gravimetric parameters were accurately determined using the SG data recorded at the Lhasa station from December 8, 2009 to September 30, 2011. The standard deviation was as little as 0.498 nm s^(-2). The precision was better than 0.002% for the four main tidal waves and better than 0.02% for the other tidal waves with amplitudes exceeding 20 nm s^(-2). This implies that the tidal gravity observations made with the SG at the Lhasa station were so accurate that they can act as a regional tidal gravity reference for gravity measurements in the Tibetan Plateau and its surrounding regions.

The OTL vectors were obtained based on the classical theory of the Earth's surface loads and the global oceanic tide model FES04. It was found that the OTL effects on the tidal gravity observations made with the SG at the Lhasa station were very small: the OTL resulted only in slight perturbations in the observed gravimetric parameters. After removal of the OTL effects, the difference between the observations and the recent theoretical model was still as large as about 0.7% for the diurnal gravity tides and about 0.6% for the semidiurnal gravity tides. In other words, the OTL cannot reasonably explain so large a difference; instead, the active tectonic movements and the extremely thick crust in the areas surrounding the station may be responsible for the differences. For the same reason, the FCN period retrieved from the SG observations obtained at the Lhasa station was slightly longer than the one retrieved by stacking the tidal gravity observations from global SGs.

There were abundant signatures in the non-tidal gravity variations recorded with the SG at the Lhasa station. The numerical results indicate that there were neither obvious seasonal nor annual variations in the local atmospheric pressure. Almost all of the obvious gravity disturbances were related to the local atmospheric pressure load effects, and the pressure considerably influenced the gravity measurements in every frequency band. After removing the atmospheric effects, all of the high- and moderate-frequency vibrations disappeared, and the gravity residuals became much smoother and quieter.

In addition, the Earth's free oscillations excited by the 2011 Tohoku-Oki Mw 9.0 Earthquake were successfully detected by the SG at the Lhasa station. This is the first time that the EFO modes were observed at a plateau station. With the accumulation of SG data, combined with continuous GPS data and repeated FG5 measurements, it is hoped that some hotspots in continental dynamics in the Tibetan Plateau will be investigated further.

Fig. 7. EFO spectrum excited by the great Japan earthquake at the Lhasa station.
5,219.2
2013-08-01T00:00:00.000
[ "Geology", "Physics", "Environmental Science" ]
GAME-INFORMED MEANING MAKING IN U.S. MATH CLASSES

This article features data from a larger, ongoing eight-year study involving game-informed learning in public high school math classes in the Northeastern United States. More specifically, the focus on cooperative competition and assessment reveals how specific principles of gaming, namely discovery, reflexivity, contextual understanding, and sharing, can support the development of students' literacies and numeracies. Furthermore, this article addresses how game-informed teaching and learning can be applied to the L1 classroom.

INTRODUCTION

Videogames, and the social aspect of digital and nondigital game play, can inform pedagogy and practice (Abrams, 2017; Alberti, 2008; Bacalja, 2018; Hayes & Duncan, 2012; Squire, 2011). Focusing on nondigital experiences, this article explores how a game-informed approach to testing, which involved high school students working together to solve math problems, helped tenth and eleventh graders to develop their literacies and numeracies. Game-informed learning, which hinges on the ethos of gaming and engaged learning (Begg et al., 2005), created a space for students to persevere through academic challenges vis-à-vis discovery, reflexivity, contextual understanding, and knowledge sharing, features of videogaming that can support a much-needed shift in an assessment culture that has proven to be problematic not only in the United States, but also across the globe.

International research specifically highlights how anxiety has been negatively impacting students' achievement in science, math, and reading (OECD, 2017). Such anxiety has been associated with poor academic performance (Luttenberger et al., 2018; Namkung et al., 2019; OECD, 2017), and additional international research (Rege et al., 2021) suggests that adolescents "fail to embrace [challenging] learning opportunities…mak[ing] them less well-prepared for the realities of the current and future global economy" (p. 5). It is not surprising that there is a call to action to "find ways to alleviate the fears that individuals face as they confront intellectual challenges…[and to] craft classrooms and workplaces that communicate that challenge is a route to learning" (Rege et al., 2021, p. 27). A possible solution includes cooperative competition wherein students confront challenges together.

Extant research suggests that cooperative and collaborative learning and assessment help to improve academic performance, to enhance individual and collective knowledge, and to reduce anxiety (Abrams, 2021a; Breedlove et al., 2004; Cortright et al., 2003; Duane & Satre, 2014; Hendrickson et al., 1987; Kapitanoff & Pandey, 2018; Ngotngamwong, 2014; Rao et al., 2002; Singer, 1990; Slusser & Erickson, 2006; Zengin & Tatar, 2017). The investigation featured in this article contributes to this line of research and responds to Rege et al.'s (2021) call, highlighting the inclusion of game-informed cooperative testing in which academically struggling adolescents persevered and completed challenging problems together. Although the data are specific to the math classroom, the pedagogical approach is not discipline-specific, and implications for L1 classes are addressed.

EXTENDING TRADITIONAL BOUNDARIES: A FOCUS ON MEANING MAKING

Math instruction notoriously has been connected to arithmetic principles and computational practices.
The international examination of proficiency, the Programme for International Student Assessment (PISA), offers math assessments that focus on three primary categories (mathematics, problem solving, and financial literacy), further emphasizing information that "can be represented mathematically (e.g., comparing the total distance across two alternative routes, or converting prices into a different currency)" (OECD, 2019, p. 3). In the United States, the National Assessment of Educational Progress (NAEP) has a similar focus on computation and problem solving (NAEP Report Card: Mathematics, n.d.). Even though the NAEP included a caveat that its objectives "should not be interpreted as a complete description of mathematics that should be taught at these grade levels" (National Assessment Governing Board, 2019, p. 7), assessment often impacts instruction, and "the ways in which numeracy teaching and learning is enacted often reflects and reinforces narrow conceptions of what constitutes 'numeracy' (see for example, Baker, 1998; Coben et al., 2003)" (Yasukawa et al., 2018, p. 4). In other words, traditional mathematics instruction and assessment specifically might not include socially situated meaning making.

Such a limited, traditional understanding also has been true for literacy, which historically has been focused on reading and writing alphabetic texts. Although there is merit in examining traditional literacy and numeracy, late 20th century conceptual shifts have supported a movement away from solely skill-based, alphabetic, or numeric understandings of literacy and numeracy toward expansive definitions of literacies and numeracies, both of which include socially situated ways of making meaning inherently shaped by context and culture (Barton, 1994, 2001; Gee, 1996, 1999; Street, 1984, 1995, 1999). This conceptual shift has paved the way for more nuanced views of meaning making, including the discussion of multiliteracies (Cope & Kalantzis, 2000; Kalantzis & Cope, 2012; New London Group, 1996), which recognizes the social and multimodal nature of learning and communicating while valuing "what still matters in traditional approaches to reading and writing, and to supplement this with knowledge of what is new and distinctive about the ways in which people make meanings in the contemporary communications environment" (Kalantzis & Cope, 2012, p. 1). In other words, meaning making encompasses all experiences, old and new, and traditional notions of literacy are not abandoned; they are expanded. Likewise, the theory of multimodalities (Jewitt, 2003; Kress, 2003, 2010; Kress & Van Leeuwen, 2001) extends the concept of "text" to include a variety of modes (e.g., sound, video, movement, image), valuing a wide range of literacy experiences. For L1 educators, this understanding of literacies creates opportunities to look beyond traditional reading and writing and to honor students' experiences with and understandings of multimodal texts, including those of other disciplines. In this study, the expansiveness of literacies also applies to the math classroom, wherein mathematics is not seen solely as a "specialized and abstract set of practices" that involves numbers and arithmetic (Street & Baker, 2006, p. 220); rather, like literacies, the concept of numeracies involves socially situated interpretations and practices, but with a particular focus on mathematical concepts and thinking (Baker et al., 2001).
For instance, students can develop their numeracies when a teacher perceives problem solving as a social practice and students discover multiple ways to solve a problem by interacting with classmates, by drawing upon various digital and nondigital resources, and by engaging in formal and informal discussions about their mathematical experiences, perceptions, and understandings. Learning from peers and adopting "a social view questions the assumption of universality built into many accounts of mathematics… [and] challenges the top down view of learning" (Street & Baker, 2006, p. 222). Going one step further, this study investigates how adolescents negotiate meaning when taking a test together in their math class, and findings shed light not only on how students make meaning together, but also on how such practices can be applied to the L1 classroom.

Additionally, the literacy-numeracy connection has included a focus on numeracy events and practices that exist beyond school (Tomlin et al., 2002) and the connection between how students develop mathematical understandings at home and at school (Baker et al., 2001; Street et al., 2008). Given that literacies are tied to and embedded in sociocultural experiences, the literacy-numeracy connection becomes equally salient, especially since, like literacy practices, "numeracy practices include the conceptualizations, the discourses, the values and beliefs and the social relations that surround these activities as well as the context in which they are sited" (Baker et al., 2001, p. 43). In other words, numeracies represent socially situated meaning making with mathematical concepts, resources, value systems, and sign systems, thereby highlighting various ways to represent knowledge and understanding. This is especially important because people have different epistemic frames and communicative practices, and when the focus moves away from school-based formalities for verbal and nonverbal meaning making (e.g., speaking and behaving in one "right" way), then there can be space to witness and nurture the depths of student learning (Baker et al., 2001; Street, 2005). The expansiveness of numeracies, that is, meaning making that extends beyond formal computation and arithmetic to include socially and culturally situated values for and understandings of "mathematical ideas" (Baker & Street, 2004, p. 20) or concepts (e.g., problem solving), undergirds the discussion of practices addressed in this article. Furthermore, because the study featured in this article is grounded conceptually in the ethos of videogame play, what follows is an overview of videogames and learning, with a specific focus on the four overarching categories used to identify student meaning making during cooperative math assessments.

VIDEOGAMES AND LEARNING: CONCEPTUALLY GROUNDING THE DISCUSSION

In the last 20 years, scholars have identified how features of videogaming can relate to student learning and engagement (e.g., Annetta, 2008; Bacalja, 2022; Boyle et al., 2012; Clark et al., 2016; de Smale et al., 2015; Egenfeldt-Nielsen, 2006; Gee, 2003; Hanghøj, 2022; Hawisher & Selfe, 2007; Schaffer et al., 2005; Squire, 2006).
In addition to empirical research addressing how videogame play can support agentive learning and offer relevant contexts for academic material (e.g., Abrams, 2009, 2010; Gerber & Price, 2011; Squire, 2011; Wainwright, 2014), scholars also have presented ways to classify videogame play and the types of thinking and behavior that gaming requires and hones (e.g., Gee, 2003; Gikas & Van Eck, 2004; Lee et al., 2005; Sutton-Smith, 1997). For instance, to address how each videogame genre relates to types of comprehension, Gikas and Van Eck (2004) juxtaposed Bates's (2001) taxonomy of games (i.e., action, role playing, adventure, strategy, simulations, sports, fighting games, casual, god games, education games, puzzle games, and online) with both Gagne et al.'s (1992) Capabilities and Bloom's (1984) Taxonomy (see also Van Eck, 2007). As an example, Gikas and Van Eck (2004) noted that strategy games include problem solving, higher order rules, defined concepts, concrete concepts, and discriminations, which are five of Gagne et al.'s (1992) Capabilities. Additionally, engagement in strategy games typically includes evaluation, synthesis, analysis, application, comprehension, and knowledge, which are aspects of Bloom's (1984) Taxonomy.

Gee (2003) also contended that there are 36 ways one can learn from playing a "good" videogame, which has "good principles of learning built into its design [and] facilitates learning in good ways" (p. 6). Although earlier research into videogames includes discussions of learning via game play (e.g., Hawisher & Selfe, 2007; Schaffer et al., 2005; Squire, 2006), Gee's work, which is rooted conceptually in his understandings of literacies and social semiotics, is appropriate for this study, which explores students' literacies and numeracies, or the ways students make meaning in light of "context, values and beliefs, [and]…social relations" (Street et al., 2008, p. 17). Whereas Gee (2003) noted each of the 36 principles separately, I have organized and subsumed these principles into four overarching categories: (1) discovery, (2) reflexivity, (3) contextual understanding, and (4) sharing (see Table 1). Cutting across these four categories is Gee's first and foundational principle, "Active, Critical Learning" (p. 207), because it reinforces the notion that players are actively, not passively, engaging with the text(s) at hand. Although math classes involve forms of L1 learning vis-à-vis the interaction with various semiotic systems, the explanations that follow, as well as those featured in Table 1, include examples of each category in relation to cooperative testing in the math classroom (Abrams, 2016, 2017, 2018, 2021a) and to the L1 classroom setting.

The first overarching category, discovery, involves learning and advancement through challenge, practice, trial-and-error, and rewards and motivation. With regard to literacies/numeracies, discovery learning includes learning-by-doing (Dewey, 1916) and appears in a number of ways, such as when students work on a mathematical problem and notice different pathways to a solution. In the L1 classroom, discovery learning stems from various approaches, including, but not limited to, experimenting with writing styles and genres, exploring the social, historical, and political context of written work, and discussing how and why authors might position, and readers interpret, the characters and content.
Relatedly, reflexivity in research includes an awareness of one's contribution to the exploration and discovery of meaning (Faulkner et al., 2016). With regard to teaching and learning, Wilhelm (2013) explained that reflexivity involves "privileging the perspective, history, and values of others" (p. 57), and, underscoring that culturally situated behavior is representative of values and power systems, Bolton and Delderfield (2018) claimed, "to be reflexive involves thinking from within experiences… working out how our presence influences knowledge and actions" (p. 10). The authors argued that reflexivity can help to support explorations of why and how one does and does not perceive information and how and why one's actions might be perceived by others. Despite efforts to describe reflexivity and to call for practice that involves it, Bolton and Delderfield (2018) also contended that reflexivity is

the near-impossible adventure of making aspects of the self strange: attempting to stand back from belief and value systems and observe habitual ways of thinking and relating to others, structures of understanding ourselves, our relationship to the world, and the way we are experienced and perceived by others and their assumptions about the way that the world impinges upon them. (p. 10)

Although a full discussion and critique of reflexivity extends beyond the scope of this article (see, for example, Alexander, 2017), it is important to note that reflexivity is a process. In light of the discussion of literacies and numeracies, reflexivity might stem from a number of practices, including, but not limited to, students providing, receiving, and contemplating peer review and feedback. Given that literacies and numeracies are developed through experience-based interpretations and sociocultural contexts, it stands to reason that, in retrospect and in the moment, how students think about the material, their own understandings, and their classmates' understandings, as well as how they interpret the semiotic domain, is pivotal to their learning.

A similar understanding applies to the category of contextual understanding, which includes players acknowledging and learning about resources, objects, tools, symbols, texts, technologies, and environments. In order to engage successfully in playing a game, one needs to have contextual understanding because, with such recognition and learning, one can interpret and interact with the game in meaningful ways. The same is true in non-game settings, wherein understanding the context and disciplinary content, be it the culture of the classroom or the material on the test, is important to succeeding. Additionally, sociocultural understandings and meaning making across modalities are inherent aspects of contextual understanding, possibly materializing in the ways students interpret and reinterpret text, symbols, and drawings. In math class this might look like students discussing the purpose and application of a geometric figure or a mathematical formula, and in L1 classrooms, students might show evidence of contextual understanding when they distinguish the nuanced function of punctuation in their writing or the presence and function of literary allegories.

Finally, the category of sharing involves players relaying information and knowledge to others, and working together in dyads or in groups to achieve a particular objective and/or task, perhaps as a means to teach others, learn from others, and/or become part of a social activity or group.
In his introduction to What Video Games Have to Teach Us about Learning and Literacy, Gee (2003) explained that the purpose of his book was to "talk about what it means to discover patterns in our experiences and what it means to be 'networked' with other people and with various tools and technologies" (p. 8). In the classroom, such networking might exist in group work or, depending on how it is structured, in whole-class scenarios.

Overall, features of videogame play, and of the meaning making that can occur during game play, also can be applied to non-game scenarios. In this article, the four overarching categories related to game play (discovery, reflexivity, contextual understanding, and sharing) support the examination of learning that takes place during a game-informed testing situation. Before addressing game-informed learning, I turn to discuss the different ways that classroom teaching and learning have been identified and labeled according to game-related approaches.

GAMING, LEARNING, AND DEFINING TERMS

Various terminology has been used to describe gaming and game-related activities in a classroom. More specifically, labels such as game-based, gamification, gameful, game-inspired, and game-informed all have made their way into studies of meaning making that include digital and nondigital gaming and/or features of such practices. In what follows is a discussion of each of these terms, as well as a rationale for why this article focuses on game-informed learning to address the types of meaning making that occurred during cooperative testing scenarios.

Game-based learning

Definitions of game-based learning (GBL) vary. Systematic reviews of research include GBL (a) on its own (Jabbar & Felicia, 2015), (b) in relation to online learning environments (Tsai & Fan, 2013) and virtual worlds (Pellas & Mystakidis, 2020), (c) associated with digital technologies or digital game-based learning (DGBL; Chang & Hwang, 2019), and (d) applied to specific environments, such as Augmented Reality (ARGBL; Pellas et al., 2019). Respectively, definitions of GBL in these systematic reviews include learning (a) that occurs in digital or nondigital game worlds (Jabbar & Felicia, 2015); (b) that includes "any initiative that combines or mixes video games and education" (Tsai & Fan, 2013, p. 115), as well as students-as-players who, through digital game play, problem solve and "develop cognitive thinking and practical skills to improve their learning outcomes" (Pellas & Mystakidis, 2020, p. 1018); (c) that "incorporates educational content or learning designs into digital games" (Chang & Hwang, 2019, p. 69); and (d) that features aspects of play, strategy, and games-as-engagement tools (Pellas et al., 2019). Furthermore, a systematic review (Gris & Bengston, 2021) of other reviews of GBL research revealed that a number of studies (14 were noted) addressed GBL in relation to the use of games or game design for learning and achievement and for motivation and engagement. Additionally, research on GBL has included collaborative approaches (CGBL) that involve groups sharing knowledge, engaging in reflective and critical thinking, and collectively solving problems (Abrams, 2017; Chen et al., 2015; Shih et al., 2010). Still, too, there is an arm of GBL, game-based teaching and learning (GBTL), that focuses on "games and the features of games…being used to inspire innovations in teaching and learning" (Holmes & Gee, 2016, p.
2); however, "the diversity of instructional strategies and technologies associated with games make it difficult to identify GBTL as a unitary educational practice" (Holmes & Gee, 2016, p. 3). Furthermore, as Whitton (2012) noted, GBL extends beyond being "simply about using games to teach…but as artefacts to be studied and from which to learn" (p. 252). Across these studies, what comes to the fore is that GBL involves a game, gaming, or game design in some way, or, as Plass et al. (2015) explained, "definitions of game-based learning mostly emphasize that it is a type of game play with defined learning outcomes. Usually it is assumed that the game is a digital game, but that is not always the case" (p. 259). Although beyond the scope of this manuscript's focus, what seems necessary is a unified definition of GBL that will help researchers and practitioners speak about it in consistent ways. Gamification Equally nebulous is the concept of gamification. Plass et al. (2015) aptly noted that "what exactly is meant by gamification varies widely, but one of its defining qualities is that it involves the use of game elements, such as incentive systems, to motivate players to engage in a task they otherwise would not find attractive" (p. 259). Lee and Hammer (2011) explained gamification a little differently, contending that the concept exists on a continuum with rewards and game-related features on one end and, on the other end, curricula and pedagogy informed by game design and game principles. This continuum also calls attention to the various ways gamification has emerged in the literature, from "game thinking and game mechanics to solve problems and engage audiences" (Zichermann & Cunningham, 2011, p. ix) to adaptive learning in the classroom (Abrams & Walsh, 2014) to "the use of game design elements in non-game contexts" (Deterding et al, 2011, p. 10). Furthermore, Kapp (2012) identified nine features of gamification: "game-based," "mechanics," "aesthetics," "game thinking," "engage," "people," "motivate action," "promote learning," and "solve problems" (pp. 9-12). Kapp's inclusion of GBL as the first feature of gamification highlights just how tangled the definitions have been and how blurred the boundaries between GBL and gamification can be. Gameful and game-inspired learning Added to the mix are the concepts of gameful learning (or gamefulness) and gameinspired learning. McGonigal (2011) explained that acting like a gamer is "to be a truly gameful person" (p. 27). Deterding et al. (2011) argued that "'gamification' calls attention to the phenomena of 'gamefulness,'" (p. 9), and, even though Brunvand and Hill (2018) focused on features of game play, they still equated gamification with "the creation of gameful experiences" (p. 58). Aguilar et al. (2015) also connected gameful learning to gamification, noting that their work "extends gamification with a reimagining of the fundamental structure of classroom assessments," a process which they call "gameful design" (p. 2). Despite some roots in gamification, Holden et al. (2014) explained that gameful learning is but "one interpretation of gamebased learning," and that the difference between the two is that gameful learning "serves as inspiration for other practitioners' literal and figurative play, rather than a prescriptive construct to be reified" (p. 184). What is more, Holden et al. 
(2014) noted that a gameful learning framework involves "synthesizing multiple influences into a teaching and learning 'way of being' with games, digital media, and play," which "includes three overarching elements: attitude, identity, and ignorance" (p. 185). In other work, the emphasis of gameful learning appeared to be on teaching: "a conception of gameful learning is advanced to describe educators committed to playfulness, design, and agency within game-based teaching and learning" (Kalir, 2016, p. 359). Confounding the definition of terms and the nuances that scholars use to distinguish them (e.g., flexibility versus constriction, Holden et al., 2014), there also is the idea of game-inspired learning. In their work about game-inspired design, Aguilar et al. (2015) looked to gameful learning and "the use of games as inspiration for changes to the type and structure of tasks given to learners, with the goal of better supporting intrinsic motivation" (p. 2). Although the authors did not directly define the term, game-inspired, they called on Gee's (2003) game principles to explain how students can be co-designers of a course, with a focus on the grading system. In related work, Holman et al. (2013) explained that game-inspired learning specifically acknowledges that "similarities that commonly exist between games and school include well-defined goals at the outset, the establishment of specific challenges to be conquered, requiring practice to succeed, and using assessments to gauge whether material has been properly learned. These parallels led to the question of whether school itself could be made into a good game" (p. 260). Across these various terms-GBL, gamification, gameful, and game-inspired-the game, or the aspect of creating a game or a representation of one, is central to the concept. Perhaps this is because, as Whitton (2012) noted, "all games, digital and traditional, naturally embody a range of techniques that help to create effective learning experiences…The use of games can be an excellent way to support constructivist pedagogies through active learning and participative teaching approaches" (p. 252). Nonetheless, there is one more approach that is important to acknowledge, and that is game-informed learning (Abrams, 2021a, 2021b; Begg, 2008; Begg et al., 2005; Bronack et al., 2006; Reinhardt & Sykes, 2014), wherein the game is not central to the activity; rather, active learning and participatory problem solving-elements of successful gaming-are essential parts of the activity even if a game is not.

Game-informed learning

Amidst the terminology wars that appear to be taking place-that is, researchers, including me, struggling to find the most precise term to describe what is taking place either on its own or nested within a larger classroom ecology-there is one additional construct important to acknowledge: game-informed learning. Unlike GBL, gamification, gameful and game-inspired learning, which all seem to place the game at the center of discussion, often with the "game as a host into which curricular content can be embedded" (Begg et al., 2005, p. 1), game-informed learning is about valuing the ethos of gaming and engaged learning. More specifically, Begg et al. (2005) explained that game-informed learning emphasizes "that educational processes themselves should be informed by the experience of gameplay" (p. 1). Bronack et al.
(2006) aptly noted that game-informed learning involves "applying lessons learned from game play as a guide to existing educational processes" (p. 220), which underscores that game-informed learning involves "game and play principles applied in digital and non-digital contexts outside the confines of what one might typically consider a game" (Reinhardt & Sykes, 2014, p. 3). In other words, students do not need to be playing or designing games; rather, they can be engaged in cooperative or collaborative problem solving, goal setting, reflective practice, and strategizing-some of the many features often found in gaming-without there being a specific game or game design allocated to or associated with the particular classroom practice. Game-informed learning has similarities to game-oriented learning (Hanghøj et al., 2019) in that it "involves participants' active processes of imagining, enacting, and reflecting on particular courses of action and possible outcomes" (p. 1). The difference between the two is that game-informed learning does not involve a particular game or simulation, whereas game-oriented learning hinges on the use of games and "scenario-based education" (p. 1). This article targets a very specific activity-a cooperative testing situation informed by cooperative competition (i.e., a type of interaction related to play and helping opponents, also known as coopertition; Abrams, 2015, 2017, 2021a, 2021b). Thus, the phrase game-informed learning is used to identify that, although gaming was not part of the cooperative assessment, it included behaviors and practices that also appear in or are informed by game play.

GAME-INFORMED LEARNING AND COOPERATIVE COMPETITION

Coopertition® is the portmanteau of cooperation and competition, and it has been a foundational feature of the For Inspiration and Recognition of Science and Technology (FIRST) robotics organization. At the heart of coopertition is the interest in helping others, be they teammates or opponents, in the spirit of advancing healthy competition. For instance, audience members of a FIRST robotics competition, which involves a robot balancing on a seesaw-like platform, might see one team position its remote-controlled robot to help its opponent's robot onto the platform. Carefully shuffling their robots, the two opposing teams negotiate space so they both balance their robots on the seesaw together, simultaneously; both teams then are rewarded with points not only for achieving the goal, but also for helping each other (Abrams, 2015). This type of "gracious professionalism®" is rooted in respect for one's self and for others (FIRST Values, 2017, para 2). In this study, there were no remote-controlled robots; however, the ethos of coopertition and the emphasis on assisting others was embedded in the in-class cooperative assessments (i.e., math tests high school students completed together in class, see Figure 1). These tests primarily were cooperative in nature and involved opportunities for students to engage in socially responsible behavior, which also was anchored in the classroom culture (Abrams, 2017, 2021a, 2021b). Relatedly, such interactivity hinged on students' movement around the room, their use of manipulatives, their communication with their partners and other classmates, and their ongoing reflective, trial-and-error practices (Abrams, 2017).
Figure 1. An example of the cooperative set-up during a test students completed together.

Coopertition also has been examined in conjunction with GBL when the focus has included nondigital game play (Abrams, 2017). With or without the game as a central feature in the classroom, the type of iterative activity and reflection taking place is similar to that achieved through elements of gaming known as the feedback loop (Abrams & Gerber, 2013). With an understanding of the rules and objectives of a game, players look to forms of feedback (e.g., progress bars that show the number of lives remaining, in-game maps that show where one is in the game, and leaderboards that showcase achievements) to make decisions for game play. Thus, reflection is an important component of gaming (and, by extension, game-informed learning) even if one is not fully aware of such reflection beyond the game space, and players might need other scaffolds to help them apply reflection skills to other contexts and scenarios (Abrams & Gerber, 2021; Ke, 2008). Although aspects of coopertition can be used in conjunction with a specific game or game design (Abrams, 2017), for the research featured in this article, there is no game being played and no activities structured according to games. Rather, the ethos of the game vis-à-vis coopertition-helping others and benefitting as a result-is what informed the cooperative tests, and this article suggests that the game-informed practices (i.e., cooperative testing) supported the development of students' literacies and numeracies.

ABOUT THE STUDY

Since Fall 2014, I have been engaged in a longitudinal study of gaming and learning in math classes in a public high school in the Northeastern United States. Over the course of the now eight-year study, I have continued to work with the same teacher-Mr. G (all names are pseudonyms)-to implement game-based and game-informed activities to help students think expansively about math and about meaning making. I have observed hundreds of hours of classroom instruction and student interaction, engaged in formal and informal lesson and activity planning with Mr. G., conducted student interviews, and surveyed student feedback. The data informing this study stem from the 2016-2018 academic years. From 2016-2017, I received a research leave and went back to high school as both a participant observer and a collaborating educator. I visited the high school and Mr. G's class on a daily basis. Attending three tenth-grade geometry classes and two eleventh-grade algebra classes (n = 96), I conducted over 400 hours of classroom observations, 11 individual interviews with eleventh grade students, and over 10 informal planning sessions and two formal interviews with Mr. G. During that time, the students also completed activity-related questionnaires, and they debriefed in whole-class discussions, as well as in the online space, backchannelchat.com. Similar activity-related questionnaires and debriefings took place during the 2017-2018 academic year, when I observed approximately 90 hours of class instruction and worked with Mr. G's two tenth-grade geometry classes and two eleventh-grade algebra classes (n = 82). Since my research leave, I have engaged in over 300 additional observation hours, 10 student interviews, as well as over 20 formal and informal planning sessions with Mr. G.
This article draws upon a particular aspect of the longitudinal study guided by the overarching question: How might a game-informed approach to testing shed light on how students develop their literacies and numeracies in light of challenge? Data were coded according to the four principle-related categories-discovery, reflexivity, contextual understanding, and sharing. This deductive approach to coding also was complemented by inductive coding, which enabled other codes to emerge in situ (Spindler & Spindler, 1987). The initial round of inductive coding included descriptors, such as "how: decision," "why: decision," "what: concept learned," followed by a second round identifying "reflexive thinking." In addition to whole-class debriefs and member checking opportunities, researcher field notes and student artifacts supported data triangulation, which, along with thick, rich description (Geertz, 1973), contributed to the depth of the qualitative inquiry.

COOPERATIVE OPPORTUNITIES TO EXCEL

Although students had been working in tandem to solve problems in Mr. G.'s class, in Winter 2017, Mr. G. and I began discussing the inclusion of cooperative testing, which soon became known as the cooperative opportunity to excel (or COTE, Abrams, 2021a). During the ideation phase, Mr. G. and I embraced coopertition principles, and we sought student feedback for the idea, which we piloted in Natalie's class. Natalie (whom I interviewed once each year for three years) was instrumental in helping us develop a system for students to experience a review-based cooperative testing scenario and rate their individual partnerships, data that ultimately led to the creation of COTE pairs for the first cooperative test. A second COTE took place in Spring 2017. Coopertition was the inspiration for the COTE, and, even though neither gaming nor game design was involved, some students, such as Murdock, noticed that the COTE included behavior similar to gaming: "I'd say we been doing co-op gaming, basically. We were working together, sometimes doing the same thing to get to the same solution, sometimes just diverging a path and just seeing what we can do." As Murdock noticed, the purpose of the COTE was for students to work through mathematical problems and perplexities together. Thus, during both COTEs, students either faced each other or sat side-by-side and were encouraged to speak to their partners (see Figure 1). In fact, unlike traditional testing scenarios wherein students are to remain silent and talking is impermissible, a COTE hinged on student communication, something that was surprising for some students, including one who noted that he was shocked "That we were able to do it. Normally teachers don't support working on tests together." Not only did Mr. G. and I review with the students how to ask questions, an approach that became part of classroom practice in subsequent years, but also we circulated the room and reminded students to speak to one another, to discuss the answers together, and to help each other understand why (as opposed to what) solutions are possible. We emphasized that what was important was working together to solve a problem, not simply telling someone the answer. Audio data from each COTE confirmed researcher observation notes revealing that students' discussions remained focused on the COTE material (e.g., there were no tangential conversations) and included questions and responses about how to solve a problem (as opposed to simply stating the correct answer).
One student said, "What did you get. Actually. How did you get it?" In other words, the student quickly realized that the purpose was to understand the answer, not just receive the answer, perhaps because the rules of the COTE specifically supported knowledge-sharing communication through how-based questions. FINDINGS In what follows are data from interviews and surveys, as well as whole-class debriefing notes, that provide insight into the how the game-informed COTE contributed to students' development of their literacies and numeracies-their meaning making beyond alphabetic and numeric texts, as evidenced in the students' learning-by-doing, strategizing, perspective-sharing, and application of newfound understandings. Although sharing is its own category, it is connected with each of the other three (discovery, reflexivity, and contextual understanding) because, just as "Active, Critical learning" (Gee, 2003, p. 207) is fundamental to all categories, given that the COTE included at least two students working together, the sharing category became a constant. Discovery and sharing Across post-COTE surveys and debriefing sessions, students noted ways in which the cooperative testing helped them to develop a better understanding of the material. When responding to the question, "What, if anything, surprised you the most about doing a COTE?" students noted that they engaged in trial-and-error learning and/or applied the concepts that either they co-discovered or that one classmate remembered. Some students embraced a think-tank format, which enabled them to have "Our ideas bounced off of one another [which] helped us to find the solution." Whereas brainstorming and informal idea sharing were part of some students' strategy, others, like Melissa, explained that her partner and she worked through the problems methodically: "We went over each step and evaluated the problem to find the solution together." Such a step-by-step approach required a degree of experimentation to find the solution; if the students were confused, then they needed to find out why. This is similar to what another group noted was their method: "We help[ed] each other with the formula and we both did the problem separately to see if we got the same answer." A classmate offered additional insight into such cooperative work, explaining that, during the COTE, "We can discuss possible answers and outline our steps/logic, our partners can help us find mistakes in our own work and explain why we got a particular answer." In this sense, discovery is supported by sharing and vice versa, also underscored by yet another approach-students offering each other guidance on how to solve a problem. One student explained, "If I didn't know anything he would help me with the formula and if he didn't know something I would help him," suggesting that, at times, the COTE included a type of reciprocal knowledge-sharing and the application of that newly discovered understanding. Although the COTE involved positive discoveries and sharing, there were some instances when more support was needed. For instance, one student reported, "We both didn't really know what to do so it was just spreading wrong information." 
Akin to two people playing an unfamiliar game that they cannot figure out on their own, this student and his partner did not seem to have the disciplinary knowledge to support the necessary examination of content and context; this point was evident in his follow-up suggestion for a future COTE to include "Doing it with information we all know." The importance of prior knowledge-and the ability to make inferences with prior knowledge-is something Ama called attention to when she said, "If we were both confused nobody helped anybody." Yet, these examples contrast with another group's experience wherein at least one group member made disciplinary inferences: "Both of us were not here the day of the lesson so I just used my prior knowledge of the topic." Discovery and sharing, in other words, can be effective but only if the students have the literacies and numeracies-from content knowledge to the value of problem solving to seeing how mathematical concepts exist in their world-to do so. As Jort explained about working with a partner, "I think they were able to help me with problems that I didn't understand and I wouldn't have been able to do that if I was working alone."

Reflexivity and sharing

Although discovery was an important component, so was thinking reflexively and honoring the perspectives of others. Natalie, who helped to design the COTE, explained how her partner helped her to become "unstuck": "When I was stuck on a problem, I didn't understand he helped by showing me different point of views." In this case, Natalie also needed to be open to hearing feedback and suggestions from her partner. Another student offered additional insight into the ways the COTE involved an openness to others' ideas: "Working together helped me because if me and my parent [sic] got a different answer we were able to work together and get the correct one by explaining it and coming to an agreement." Relatedly, during the COTE, there appeared to be an awareness of one's thinking and others' opinions. As was the case with Natalie, who learned from seeing "different points of view," another student noted that working with a partner during the COTE "made me think more about the question when I had another opinion helping me check over the problem." Likewise, other students spoke about how being aware of others' thinking led to revision because they worked "together…by asking questions and revising." In other words, the COTE supported students building on individual and shared knowledge; students valued others' perspectives ("different points of view"), embraced the act of revision ("asking questions and revising"), reflected on their work ("made me think more about the question"), and engaged in social relations ("working together…coming to an agreement"). In this way, there was no top-down teaching or one "right way" to solve the problem. Rather, students explored various routes to solutions, and the COTE supported students' socially situated literacies and numeracies.

Contextual understanding and sharing

The aforementioned examples showcase the ways in which students described their experiences during the COTE. When it came to contextual understanding, however, which also includes knowledge of information (e.g., symbols, tools, resources, and contexts), students specifically identified what was problematic and how they reached new understandings.
Students, such as Keon, who was working on a geometry problem, articulated areas of confusion and uncertainty, as well as information that became salient after working with a partner. He explained, "I forgot to divide my answer by two because it was a triangle and [Matteo] helped me understand why I needed to." Similarly, another student identified that his geometric confusion was related to a formula: "I didn't realize that the formula for volume of cylinder was pi r squared height, I thought it was just pi r height." Here, the students used content area vocabulary to explain points of confusion; however, equally important is that the students identified where they were confused (as opposed to noting general confusion). As a result, students could get specific help from their classmates, an important aspect of the COTE and a practice evident when Weston, who said he "could not understand how to do a graphing problem," acknowledged that being receptive to "different perspectives" helped him to find the answer. In addition to noting the specific mathematical concept they recalled and/or understood, students acknowledged how cooperative testing helped them to understand the material better: "We were able to support one another and help each other figure out which part of the work was wrong, and how we were supposed to correct it, such as when one of us put 3 root 8 when it's supposed to be 3 root 2." Here, too, students were aware of their numeracies through a keen attention to what they did not know or where they were confused. For some students, these were minor blunders; one student noted that it was worthwhile "working together to eliminate silly mistakes." However, as Terrance noted, these "silly mistakes" still were critical to developing his mathematical knowledge and skillsets, and he saw the COTE as "working together to make us better" and "being able to completely understand some problems." It is not surprising, therefore, that most of the students not only completed all the questions on the test (as opposed to leaving some blank), but also, in the face of mathematical adversity, when they had the prior knowledge to do so, they worked through the problems together.

ENVISIONING LITERACIES AND NUMERACIES IN THEIR LIVES

Throughout their formal and informal discussions, students did not explicitly use the words, literacies and numeracies. However, their feedback suggests that they envisioned their meaning making with texts and with their peers as something that would be part of their future. Their understandings and insights can be conceptualized vis-à-vis the aforementioned categories: discovery and sharing, reflexivity and sharing, and contextual understanding and sharing. Furthermore, although these categories are parsed in this section to support the discussion of the data, ultimately, each of these features works in concert with the others as part of the overall meaning-making experience.

Discovery and sharing

The game-informed cooperative assessment involved students playing with text-numbers, shapes, words-to explore, via trial-and-error, possible answers to the test questions. Although this required traditional literacy and numeracy practices (i.e., reading and writing alphanumeric texts), students also honed their literacies when they outlined their steps and brainstormed their ideas, and they developed their numeracies as they strategically explored solutions to their math problems.
Additionally, students acknowledged that the type of thinking and behavior that were part of the cooperative assessment would be necessary for their future employment. One student even perceived the far-reaching implications of the type of knowledge sharing that occurred during the COTE, noting, "In the real world when we get jobs we will always be available to work with other people." Another student explained how such interaction will be an essential component of his future career:

I prefer the COTE because in the work setting, I will be utilizing the people around me to problem solve and to bounce ideas off of. This is good preparation on how to interact with other students in a more 'professional' setting since this is a necessary skill needed in almost every work setting such as a fireman, office worker, teacher, or policeman.

In this way, students envisioned their game-informed experiences and their socially situated literacies and numeracies extending beyond school and into their lives (Baker et al., 2001; Street et al., 2008). Furthermore, the game-informed COTE became a conduit to hone student discovery and sharing in a supportive way. Students reported the various strategies they embraced to complete the cooperative test and to approach the challenges together. Such cooperative work also has been known to mitigate anxiety and support student learning in and beyond the math classroom (Abrams, 2021a; Bahar-Özvariş et al., 2006; Zengin & Tatar, 2017).

Reflexivity and sharing

The game-informed COTE also included a reflexivity wherein students realized that math could be seen from "different points of view" and that problem solving involved "asking questions and revising." In the L1 classroom, students consider the points of view of their classmates and of literature-based characters, and it is helpful for students to hone this skill elsewhere. In math class, such perspective-taking also supported an understanding that there can be multiple routes to a solution. In this way, the expansiveness of numeracies comes to the fore because students shared, perceived, and negotiated their own situated understandings of a problem. Furthermore, the data suggest that the students understood-either tacitly or explicitly-that their literacies and numeracies did not solely involve one prescribed "right" way of being. Rather, meaning making is expansive, plural, and, in many ways, cooperative. Such noticings are important to life-long learning and to the L1 classroom. After all, being receptive to feedback and viewing learning as flexible-that there is a way to get "unstuck," as Natalie stated-is important to persevering through challenges, be they a test question or a reading or writing task. And, like Murdock said, the COTE had a co-op gaming feel in that there were opportunities to approach a problem as a team effort and to forge one's own path, "just seeing what we can do." Such trial-and-error exploration not only involves reflexive thinking, but also an understanding of context.

Contextual understanding and sharing

During the COTE, coopertition also supported students' understanding of contexts, such as a shape or concept in relation to the overall problem or challenge. Students began to articulate that understanding how to reach that answer (e.g., "I forgot to divide my answer by two because it was a triangle") is important.
These data suggest that, in a game-informed activity, such as a COTE, problem solving and recognizing possible routes to a solution became central to students' literacies and numeracies development. Whether students interpreted a graph or understood the application of a formula, students were using disciplinary vocabulary when explaining what they knew and what they learned during the COTE (e.g., "I didn't realize that the formula for volume of [a] cylinder was pi r squared height"). Situating language and knowledge within a particular context or discipline is important. Take, for instance, the words, complement or complementary. In an L1 classroom, students might discuss how an author's use of imagery complemented the setting's description. In a math class, however, students might learn that complementary angles add to 90 degrees. Distinguishing the contextualized nature of meaning is essential to the development of disciplinary knowledge. During their debrief of the COTE, students used content-area language (e.g., "the formula for volume of a cylinder was pi r squared height") to explain points of confusion and clarification. And the students further honed their numeracies because the COTE supported the type of knowledge sharing often seen in gaming. During the COTE, this involved students offering each other support to persevere through challenges ("help each other figure out which part of the work was wrong, and how we were supposed to correct it"). Finally, we see students developing their literacies and numeracies by making meaning through socially situated understandings-be they relating a concept to out-of-school practices, such as co-op gaming, or to a real-world situation, such as future employment.

APPLICATIONS AND LIMITATIONS

This study of a game-informed assessment suggests that cooperative meaning making creates a space for students to share perspectives and to teach each other. A test, therefore, no longer represents a solitary endeavor or a summative evaluation; rather, it transforms into a cooperative opportunity to excel that is formative in nature because students learn-by-doing even during the exam. Although this study involves students in high school math classes, the game-informed ethos can be applied to L1 classrooms. For instance, cooperative discovery can transform other forms of group work. Students can work together to write an essay or to present an argument, or students can call upon each other for help when working through content-from vocabulary to grammar to literary works-that they find difficult to understand. Students also can take tests together in a similar fashion to those in Mr. G.'s class: They can sit side-by-side or across from one another and work through the questions together. Although beyond the scope of this manuscript, game-informed cooperative testing can help to offer students relief from the stress and anxiety that often accompanies traditional assessments (see Abrams, 2021a), and the same could be true for the L1 classroom. One limitation of this study is that the COTE (as presented in this article) involves a discrete testing space. However, the examination of coopertition-inspired work, which also extends beyond the scope of this article (e.g., Abrams, 2017), can inform ways that L1 educators support student-driven responses to material and to tasks that students find challenging.
Another limitation is that the COTE took place in a class that embraced coopertition and game principles, and it is unclear how a COTE would be (or could be) implemented in a classroom that (a) focuses primarily on individual accomplishments, (b) often does not include group work, and/or (c) typically does not include students' reflective debriefs.

FINAL THOUGHTS

Coopertition is not about "giving" answers. Rather, it is about students identifying and communicating to each other what they understand and what they find challenging, and then working with their classmates to solve a problem and advance their individual and collective understandings. This approach runs contrary to traditional forms of assessment that value isolated learning and evaluate students individually, capturing data that represent a student's understanding-or perhaps how a student interprets a test question-during one discrete moment. A game-informed approach to testing, such as the COTE, not only underscores the importance of literacies and numeracies in and beyond the classroom, but also emphasizes how cooperative problem solving can help students think beyond themselves and work with others to achieve a common goal. As students reviewed the problems, acknowledged where challenges existed, and worked together to solve the mathematical problem, they were immersed in active, critical learning. In order to honor students' literacies and numeracies in the classroom-be it the L1 classroom or the math classroom or any classroom, for that matter-it is important to create opportunities for students to work together to discover, to (re)think, and to reflect upon their understandings in ways that make sense to them and that help them to achieve new and renewed meaning(s). Although the inclusion of games can be part of this endeavor, games do not need to be a central focus. Rather, classroom activities and/or classroom culture can be informed by the ethos of game play even if a game is not present. This study creates spaces for additional explorations into nondigital game-informed pedagogy and practice that have the potential to transform and enhance experiences in and beyond the L1 classroom.
11,852.4
2022-07-13T00:00:00.000
[ "Education", "Mathematics" ]
Numerical Existence Property and Categories with an Internal Copy

We define here a notion of internal copy and of weak internal copy of a category. We will then determine some families of categories having an internal copy or a weak internal copy. We will consider categories of definable classes of first-order theories and we will see that the notion of internal copy is related to the notion of numerical existence property.

Introduction

A category C can host internal algebraic structures such as monoids, groups, rings, etc. Among these internal algebraic structures there are categories, too. An internal category is defined as an internal graph having a composition arrow and an identity arrow making some diagrams (representing associativity of composition and properties of identities) commute (see e.g. [3]). An internal category cannot directly be compared with the category in which it lives. However, it can be "externalized" by means of global elements. It is hence natural to compare this externalization with C. In this paper we deal with the question whether there exist categories C having an internal copy, that is an internal category in C of which the externalization is isomorphic to C itself. We will show that one can produce examples of a weakening of this notion by considering some categories of definable classes of first-order theories. A metaproperty called numerical existence property will play a crucial role in this case. Finally, we will produce an example of a category with an internal copy in the strong sense.

We can weaken the notion of internal copy by relativizing it to a doctrine over C. If p : C^op → InfSL is a (primary) doctrine, that is a contravariant functor from C to the category of inf-semilattices, we can consider the internal categories in the base category Q_p of its elementary quotient completion (see [5]). The objects of this category are pairs (I, ρ) where I is an object of C and ρ ∈ p(I × I) is a p-equivalence relation on it; arrows are taken with respect to the equivalence relation for which f and g are equivalent if and only if ρ ≤ p(f × g)(η). If C is finitely complete, then Q_p has all finite products. In particular, a terminal object in Q_p is given by the pair (1, ⊤).

Among the doctrines over C there is in particular the subobject doctrine. We can hence give the following definition:

Definition 2.4. Let C be a finitely complete category with a primary doctrine p over it such that Q_p is finitely complete. A p-internal copy of C is a pair (Γ, I) consisting of an internal category Γ of Q_p and an isomorphism I : Ext_{Q_p}(Γ) → C. A weak internal copy of a finitely complete category C is a Sub_C-internal copy of C.

If the doctrine p is elementary (see [5]), then one can define a functor ∇ from C to Q_p sending each object I to (I, ∃_{Δ_I}(⊤_I)) (where ∃_{Δ_I} is left adjoint to p(Δ_I)) and sending an arrow f to [f]. If p has comprehensive weak equalizers, then ∇ is full and faithful. In this case, as a consequence, if C has an internal copy, then it has also a p-internal copy. If C is regular, the subobject functor is an elementary doctrine having comprehensive weak equalizers (see [5]); hence every internal copy of C is also a weak internal copy. From the very definition of p-internal copies, it follows that no nontrivial finite category or preorder has a p-internal copy; moreover, obviously, no locally small non-small category has a p-internal copy.
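For concreteness, the conditions making ρ ∈ p(I × I) a p-equivalence relation, used above in the description of Q_p, can be spelled out as inequalities in the fibres of the doctrine. The following is only a sketch in our own notation, where pr_i denote product projections; see [5] for the precise formulation:

⊤_I ≤ p(⟨id_I, id_I⟩)(ρ),   ρ ≤ p(⟨pr_2, pr_1⟩)(ρ),   p(⟨pr_1, pr_2⟩)(ρ) ∧ p(⟨pr_2, pr_3⟩)(ρ) ≤ p(⟨pr_1, pr_3⟩)(ρ),

expressing reflexivity, symmetry and transitivity, respectively; the first inequality lives in p(I), the second in p(I × I) and the third in p(I × I × I).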
Categories of Definable Classes

Let T be a first-order (intuitionistic or classical) theory with equality. Let x, y, z be fixed distinct variables of the language of T. The category of its definable classes DC[T] is defined as follows: the objects are the definable classes {x| ϕ} determined by formulas ϕ of T with at most x as free variable, while the arrows are (T-provable equivalence classes of) formulas F with at most x and y as free variables which T proves to define functional relations between such classes.

The following proposition shows some sufficient conditions which guarantee the category of definable classes of a first-order theory with equality to be finitely complete. The proof is omitted since it consists simply of a verification.

Proposition 3.1. Let T be a first-order theory with equality.

1. If τ is a formula with at most x as free variable such that T ⊢ ∃!x τ, then {x| τ} is a terminal object in DC[T]. In particular, if the language of T has a constant k, then DC[T] has a terminal object.

2. If π is a formula with at most x, y and z as free variables such that T ⊢ ∃!x π, then every pair of definable classes {x| ϕ} and {x| ψ} has a product in DC[T], which can be constructed by encoding pairs by means of π.

As a consequence of the proposition above, the categories of definable classes of Peano arithmetic PA and of Heyting arithmetic HA are finitely complete. The same holds for the categories of definable classes of set theories like ZFC, ZF, IZF and CZF. In DC[PA] and in DC[HA] a terminal object is given by {x| x = 0}, while in set theories the definable class {x| ¬∃y(y ∈ x)} is terminal. For binary products, in HA and in PA one can take π to be (a formula representing) x = 2^y(2z + 1); in set theories, one can take π to be a formula expressing that x is the ordered pair {{y}, {y, z}}.

Definition 3.2. We will call a first-order theory with equality T having formulas τ and π satisfying the properties in items 1. and 2. of Proposition 3.1 a cartesian first-order theory with equality. Whenever T is a cartesian first-order theory with equality, we assume that the finitely complete structure of DC[T] is the one determined by the constructions in Proposition 3.1.

First-Order Theories with Natural Numbers

Definition 4.1. Let C be a cartesian category. A parametric natural numbers object is a triple (N, z, s) where N is an object, and z : 1 → N and s : N → N are arrows, such that for every pair of objects P, Q and every pair of arrows f : P → Q and g : Q → Q there exists a unique arrow h : P × N → Q such that

h ∘ ⟨id_P, z ∘ !_P⟩ = f   and   h ∘ (id_P × s) = g ∘ h.

In a cartesian category with a parametric natural numbers object, every primitive recursive function between natural numbers can be represented. Indeed, as a consequence of the definition, for every f : N^k → N and g : N^{k+2} → N there exists a unique arrow rec[f, g] : N^{k+1} → N satisfying, in element notation, the usual primitive recursion equations

rec[f, g](n_1, …, n_k, 0) = f(n_1, …, n_k)   and   rec[f, g](n_1, …, n_k, s(m)) = g(n_1, …, n_k, m, rec[f, g](n_1, …, n_k, m)).

A cartesian first-order theory with equality T has natural numbers if there are formulas Nat(x), Zero(x) and Succ(x, y) giving rise to a parametric natural numbers object in DC[T]. The theories of arithmetic PA and HA have natural numbers.

Numerical Existence Properties

Definition 5.1. A first-order theory with equality T having natural numbers has the numerical existence property (nEP) if, for every formula ϕ having at most x as free variable such that T ⊢ ∃x (Nat(x) ∧ ϕ), there exists a natural (meta)number n such that T ⊢ ϕ[n/x].

The numerical existence property nEP essentially means that if a natural number satisfying a property is proven to exist in T, then a natural (meta)number can be proven to satisfy that property in T. Peano arithmetic PA, Zermelo–Fraenkel set theory ZF and, in general, classical first-order theories with equality of numbers or sets (if consistent) do not have the numerical existence property.
Indeed one can consider an independent sentence I (which exists by Gödel's first incompleteness theorem): clearly T ⊢ ∃x ((x = 0 ∧ ¬I) ∨ (x = 1 ∧ I)) as a consequence of the law of excluded middle; however there cannot be a numeral n such that T ⊢ (n = 0 ∧ ¬I) ∨ (n = 1 ∧ I), since in that case n would be 0 or 1 and we could hence prove ¬I or I in T. Heyting arithmetic HA has the numerical existence property: this was proven by means of realizability by Kleene (see [4]). CZF and IZF also have the numerical existence property, as was proven by Rathjen in [6] and Beeson in [1], respectively.

Internalizing Definable Classes

Every cartesian first-order theory with equality T having natural numbers enjoys a primitive recursive Gödelian internal encoding of its syntax by means of natural numbers. We fix such an encoding. We also use, with abuse of notation, symbols for primitive recursive functions between natural numbers (including a primitive recursive bijective encoding p of pairs of natural numbers with primitive recursive projections p_1 and p_2), since they can be adequately represented in T. In particular,

1. Every variable ξ in the syntax of T is encoded by a numeral ⌜ξ⌝;
2. We use the notation ⌜·⌝ also for the encodings of connectives, quantifiers and equality as primitive recursive functions;
3. We use sub(x, y, z) to denote the code of the formula encoded by x in which the variable encoded by z is substituted by the variable encoded by y;
4. There is a predicate form(x) expressing the fact that x is the code of a formula of T;
5. There is a predicate free(y, x) expressing the fact that y is the code of a variable which is free in the formula encoded by x;
6. There is a predicate pf(u, x, y) expressing the fact that u is the code of a proof in T of the formula encoded by y from the assumption encoded by x;
7. There is a predicate notocc(x, y) expressing the fact that the variable encoded by x does not occur in the formula encoded by y;
8. We write der(x, y) as an abbreviation for ∃u (pf(u, x, y)).

One can hence define some formulas which will be helpful in the following sections:

1. We define the formula dc(x) as form(x) ∧ ∀y (free(y, x) → y = ⌜x⌝), which expresses the fact that x is the code of a formula of T having at most x as free variable. Here and in what follows we will use the formula notocc to avoid problems with substitutions.

An Example of Category with a Weak Internal Copy

Exploiting the fact that in a category of definable classes DC[T] of a first-order theory with equality every mono I → {x| ϕ} is isomorphic to one of the form ψ ∧ x = y : {x| ψ} → {x| ϕ}, one can prove the following proposition, which relates DC[T] to a category DC_q[T] through a functor sending each arrow F to itself. The category DC_q[T] is finitely complete and, from the very definition of the abbreviations in the previous section, the following proposition holds.

Proof. We can assume T to be consistent, since otherwise the thesis trivially holds. As a consequence of the corollary above, it is sufficient to prove that the externalization of the internal category is isomorphic to DC_q[T].

1. Given an arrow [F] from 1 to (ΔΓ_0[T], ≡_0) in DC_q[T], totality of F is provable in T; thus, by nEP there exists a natural (meta)number n such that T ⊢ ∃x F[n/y]. Decoding n one can construct a formula ϕ_n such that ⌜ϕ_n⌝ is n. The definable class {x| ϕ_n} is the object to which [F] is sent. This application is well-defined, since if F and G represent the same arrow from 1 to (ΔΓ_0[T], ≡_0) in DC_q[T], and n and m are natural (meta)numbers such that T ⊢ ∃x F[n/y] and T ⊢ ∃x G[m/y], then T ⊢ ∃u pf(u, n, m) and T ⊢ ∃u pf(u, m, n); hence using nEP we can find natural (meta)numbers k and h such that T ⊢ pf(k, n, m) and T ⊢ pf(h, m, n).
Decoding k and h we obtain actual proofs of ϕ_n ⊢_T ϕ_m and of ϕ_m ⊢_T ϕ_n.

2. We use an analogous procedure to define the functor on arrows, exploiting nEP.

These two functors determine an isomorphism of categories.

The Classical Case

In case of cartesian classical first-order theories with equality and natural numbers one can define some quotients in DC[T] using the minimum principle which holds for natural numbers: for every formula ϕ with at most x as free variable, T ⊢ ∃x ϕ → ∃x (ϕ ∧ ∀y (y < x → ¬ϕ[y/x])).
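A natural way to use this principle, sketched here only as an illustration (the formula can and its role are our own reconstruction, not necessarily the paper's exact construction): given a formula ρ with at most x and y as free variables defining an equivalence relation on a definable class {x| ϕ} of natural numbers, one can select least representatives by

can(x) := ϕ ∧ ∀y (y < x → ¬ρ),

so that {x| can} contains exactly one element of each ρ-equivalence class and can serve as a quotient of {x| ϕ} by ρ in DC[T].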
2,751
2020-07-24T00:00:00.000
[ "Mathematics" ]
A Comprehensive Review of Optical Stretcher for Cell Mechanical Characterization at Single-Cell Level

This paper presents a comprehensive review of the development of the optical stretcher, a powerful optofluidic device for single-cell mechanical studies based on optical-force-induced cell stretching. The different techniques and the different materials for the fabrication of the optical stretcher are first summarized. A short description of the optical-stretching mechanism is then given, highlighting the optical force calculation and the cell optical deformability characterization. Subsequently, the implementations of the optical stretcher in various cell-mechanics studies are shown on different types of cells. Afterwards, two new advancements in optical stretcher applications are also introduced: active cell sorting based on cell mechanical characterization and the effect of laser-induced heating on cell stretching measurements. Two examples of new functionalities developed with the optical stretcher are also included. Finally, the current major limitation and the future development possibilities are discussed.

Introduction

During the last decades, the technologies that were initially developed and carefully optimized for microelectronic device fabrication widely expanded into other scientific research fields. One of the most relevant results of this "contamination" between microelectronic fabrication technologies and other research fields was the creation and development of the first microfluidic devices and lab-on-chip (LoC) systems. After their appearance in the 1990s, LoC systems have grown rapidly to become a very hot topic, due to the inherent advantages they offer with respect to "standard approaches", including miniaturization, parallelization, integration and automation, as well as low consumption, high efficiency, rapid analysis and cost-effectiveness. Compared with conventional methods, LoC devices offer great potential and have enabled new biomedical applications, ranging from drug discovery and delivery to disease diagnosis and point-of-care (POC) devices. Since the size scale of LoC internal structures is of the same order as the size of most cells, these devices have been extensively used for cellular biology studies, and in particular to analyze the biophysics and biomechanics of single cells [1][2][3][4]. Cell mechanical properties are mainly determined by the cellular cytoskeleton, which is a complex network of filaments, microtubules and linkers. Many studies have shown that cell mechanical properties are directly related to the cell status [5][6][7], such as cell proliferation, differentiation and pathological transformation, particularly in relation to cancer. As an example, several studies demonstrated that cellular neoplastic and malignant transformations are closely connected with significant changes in the cytoskeleton, which are in turn related to changes in the mechanical properties of the cell [8][9][10]. Many different methods and experimental techniques have been proposed to assess cellular mechanical properties, either in a quantitative or in a qualitative way. For example, Vaziri, Pachenari et al. [11,12] applied a negative pressure in a micropipette to create an "aspiration region" on the cell and studied the local membrane deformation at the contact area; Mathur, Mackay, Rouven Brückner et al.
[13][14][15] determined the local cellular Young's modulus or the cell plasma membrane tension by using an AFM cantilever tip on the cell surface and measuring the relative indentation depth at constant force; Dao et al. [16] and Chen et al. [17] exploited optical tweezers or magnetic tweezers, with microbeads attached to the cell membrane, to apply a very large force onto the cell surface, and they derived the cellular viscoelastic moduli from the cell deformation. Preira, Luo, Martinez Vazquez et al. [18][19][20] developed microfluidic chips with small constriction channels and applied them to the analysis of cell migratory capabilities, allowing the study of both active and passive cell mechanical properties. However, some of these techniques can only access, and hence probe, a small portion of the cell, and most of them need direct physical contact between the studied cell and the device, which could modify the cell's natural behavior and even damage it during the measurement. Furthermore, these techniques often require quite complicated experimental preparations and they offer a relatively limited throughput. Recently, Otto, Mietke et al. [21,22] developed a purely hydrodynamic cell-stretching technique that allows a significant increase in measurement throughput; this method is ideally suited when large populations of cells are analyzed, but it does not allow cell recovery for further studies. In contrast, the optical stretcher (OS in the following) proposed by Guck et al. [8] proved to be a very powerful tool for the study of cell mechanics: it is an optofluidic device combining a microfluidic channel with laser beams for optical stretching. The laser radiation applies a contactless force on the cell surface, causing a deformation that depends on the cell mechanical properties. The use of an integrated microfluidic configuration allows a high trapping (and analysis) efficiency for the cells flowing in the channel. Several studies already demonstrated that the cell optical deformation measured with the optical stretcher can be used as a mechanical marker to distinguish healthy, tumorigenic and metastatic cells, as well as to reveal the effects of drug treatments on the mechanical response of the cell [8,[23][24][25]. In this paper we give a comprehensive review of the OS, including the different fabrication techniques and materials, the working mechanism and the different applications. In addition, several new developments and findings from recent studies are also described.

Different Fabrication Techniques and Materials

Thanks to the great improvement of micromachining technology, the performance of LoC and microfluidic devices significantly advanced during the last decade. In this section we review the different materials and techniques that were reported in the literature for OS fabrication.

Basic Structure of an OS

The basic structure of an OS is schematically illustrated in Figure 1 and it is based on a dual-beam laser trap in a microfluidic circuit. The microfluidic network is typically composed of a single channel (even if multiple-input and multiple-output structures can be realized) allowing the cell suspension to flow from an external reservoir (e.g., a vial) to the laser trap and then to the output, which can be a sterile vial, or even a simple water drop.
In order to achieve the best performance, the cross section of the channel should be rectangular, to avoid "lensing effects" from the channel-fluid interface, and the surface roughness should be extremely low, to allow a high imaging quality and to reduce the laser beam distortions at the interface. The laser trap should be designed and realized so that two identical counter-propagating beams cross the microchannel, generally in the "lower half" of the channel so as to easily intercept the cells flowing in the channel, e.g., 25 µm above the floor as reported in [26], where cells with a typical dimension ranging from 5 to 20 µm are considered. The height of the flowing cells can be slightly modified by tuning the flow speed. It was experimentally found that a good height at which to position the optical trap is between 20 and 40 µm from the channel floor, since this prevents the cells from depositing on the floor while keeping the cells flowing slowly. Furthermore, the two laser beams should preferably be aligned perpendicular to the flow direction, and they should be symmetrically positioned with respect to the channel axis. Different LoC systems and fabrication techniques were proposed in the literature to realize an OS, including semiconductors, polymers and glasses, each of them having specific properties and hence allowing different features to be integrated in the final device [27]. As an example, silicon allows for surface stability and thermal conductivity, but it is opaque, hence undesirable for imaging purposes. Polymers, which can be biocompatible and transparent, offer the advantages of low cost and availability of simple technologies for microchannel fabrication, even if their hydrophobicity and softness may be a problem for some applications. Glass, on the other hand, has the advantage of being chemically inert, stable in time, hydrophilic and nonporous, and it easily supports electro-osmotic flow. Moreover, when fused silica is considered, it possesses a very wide optical transparency range, down to the UV, and very low background fluorescence; surface coating can be easily performed and optical waveguides can be integrated in the substrate. In the following, we summarize the different methods for OS fabrication.

Conventional Discrete-Elements OS

Similarly to many optofluidic devices, the OS in its first implementation [8,28] was realized using discrete optical and fluidic components: two optical fibers were simply faced to a flow chamber where the cell suspension was flown. These first prototypes suffered from vibrations and mechanical drifts, which affected the system alignment and led to barely repeatable experiments. To solve these problems, a solution reported in the literature is to align the optical and fluidic components on a substrate, by exploiting lithographically fabricated grooves, and then to seal the system with a suitable cover to increase the device robustness [26]. The fabrication procedure of such an assembled optical stretcher (AOS) is illustrated in Figure 2a. A glass substrate is patterned with an SU-8 photoresist structure using standard photolithographic techniques. This leads to a single rectangular region of constant height (typically 35 µm) having perpendicular gaps to align and hold the optical and fluidic components. In particular, a square glass capillary is used to transport the cell suspension and two optical fibers, single-mode at wavelengths >1 µm (Hi-1060, Corning, NY, USA), are used to create the dual-beam optical trap.
A thin slab of polydimethylsiloxane (PDMS) with a 1.5 mm hole is placed over the setup so that the trap region is centered within the hole. The hole is filled with index-matching gel to reduce reflection of the laser beams. A glass coverslip is secured over the PDMS piece. Finally, the assembly is screwed onto the microscope stage and the capillary is connected to the external tubing network. As shown in Figure 2b, the finished system is placed on an inverted phase-contrast microscope for cell imaging.

Second-Generation Assembled OS

Two new methods to improve the tolerances of AOS fabrication have recently been proposed. The first one is based on the use of two asymmetrically etched glass substrates to accommodate both the flow channel and the fibers [25], as shown in Figure 3. The volumes etched from the top glass layer include the majority of the optical fiber and the entire flow channel; the bottom layer, on the other side, includes a shallow groove for the fibers only. Through a careful choice of the chip layout, a good fiber alignment and cell trapping position can be achieved, together with a significant robustness against misalignment of the two glass pieces. In fact, as shown in Figure 3, even a large misalignment of the top glass layer does not affect the flow channel and the fiber position. A small laser distortion is expected due to the etched curved surface, which can be minimized by fine-tuning the geometry of the etching layout. The second one is based on a soft polymer material [30]. The proposed chip is fabricated by exploiting an innovative process, encompassing a double resist exposure and the use of Cyclic Olefin Copolymer (COC) TOPAS5013, which allows the fabrication of a multi-layer stamp shim. Two-level grooves for fiber positioning with respect to the channel are thus obtained, as illustrated in Figure 4a. After the insertion of the optical fibers into the auto-aligning grooves, they are glued and the chip is then sealed by thermal bonding of a TOPAS foil, see Figure 4b,c. An interesting feature of this approach is that different microfluidic channel structures can easily be obtained by changing the shim, and the simple chip fabrication method makes it suitable for mass production.

Femtosecond Laser Fabricated Monolithic Optical Stretcher

A completely different approach consists in the fabrication of a monolithic optical stretcher (MOS) by femtosecond laser micromachining (FLM) technology [29]. FLM holds many advantages over other fabrication techniques [31,32]: (i) it can be applied to different transparent materials; (ii) it enables rapid prototyping of devices; (iii) it is a 3D technique, allowing the fabrication of waveguides at different depths in the substrate; (iv) it avoids complicated structure designs and assembly procedures. FLM has been extensively applied to the fabrication of optofluidic microchips in many studies. By the technique known as Femtosecond Laser Irradiation followed by Chemical Etching (FLICE) [33][34][35][36], microfluidic channels can be fabricated directly in fused silica, which makes it possible to integrate the microfluidic network and the optical waveguides in the same substrate by laser radiation. The first example of a MOS realized by FLM was obtained by integrating optical waveguides into a commercial microfluidic chip produced by Translume Inc. (Ann Arbor, MI, USA).
The chip is based on a "3-layer technology" in which a fused silica glass slide with a thickness of 250 µm is machined with the FLICE technique to obtain a through slot serving as the fluidic channel [29,37], see Figure 5. The machined layer is then placed in the middle of the structure and sealed by thermal bonding between two polished fused silica glass slides on the top and bottom surfaces. In particular, while the bottom layer is a simple, unmodified slide, the top layer has two through-holes aligned with the middle-layer channel terminations, so as to form the input and output accesses of the microchannel. The subsequent fabrication of pairs of opposing waveguides orthogonal to the channel was also realized by femtosecond laser writing, which allows the waveguide "depth" with respect to the channel to be adjusted during the laser writing process. With such a fabrication technology, the channel cross-section has a perfectly rectangular shape, with optical quality for the top and bottom channel walls and a very low surface roughness of 200 nm rms for the lateral walls, thus allowing a good imaging quality and high-efficiency optical trapping by the dual-beam configuration. Additionally, the opposing waveguides are aligned with very high precision, resulting in a robust, portable and highly flexible monolithic OS, see Figure 5c. A different monolithic OS, fully realized by FLICE in a single piece of silica glass, was demonstrated by Bellini et al. and Bragheri et al. [36,38]. Despite the significant advantage of realizing both the microchannel and the optical waveguides in a single "writing procedure", this method showed the significant drawback of producing microchannels with a high surface roughness, which can lead to a low image quality for cell imaging. However, in a recent paper by Yang et al. [39], a new laser writing geometry that decreases the surface roughness and strongly improves the image quality was reported; it is further discussed in Section 5.1.

Working Principle of the OS

In this section we briefly review the basic physical principles underlying the mechanism of an OS, which are independent of its design, material and fabrication technique. In particular, we give a detailed description of the optical force distribution calculation, of the cell optical stretching procedure and of the method used to characterize the cell deformation.

Optical Forces for Cell Stretching

Like optical tweezers, the OS exploits optical forces to trap cells; differently from tweezers, however, in the OS the scattering force can also be effectively used to stretch the samples. Each photon of the laser beam carries a momentum given by Equation (1):

p = nh/λ = nhυ/c0 (1)

where h is the Planck constant; λ and υ are the wavelength and frequency of the light; c0 is the light speed in vacuum; and n is the refractive index of the material in which the photon travels. This momentum is modified both in modulus and in direction when the photon crosses the interface between the external medium and the cell. To correctly evaluate the force distribution applied on the cell surface it is important to take into account that both reflection and transmission occur at the interface between the cell and the suspension medium; thus, reflected and transmitted photons experience different momentum changes.
The ratio between reflected and transmitted photons is given by the Fresnel equations, which are generally applied under the hypothesis of unpolarized radiation; it hence depends on the angle formed between the photon propagation direction and the cell surface, as well as on the refractive indices of the medium and the cell (with the general assumption that n cell > n medium ). Since, up to now, no method allows a direct measurement of the optical force distribution on the cell surface, mathematical and computational models have been proposed to calculate it. One of the main problems is to properly determine the force applied by the impinging laser radiation on each point of the surface. Starting from the seminal works by Ashkin et al. [40][41][42], it is possible to evaluate the overall optical force applied on a dielectric sphere by an impinging optical beam by decomposing the beam into a series of optical rays (standard ray optics-SRO). This ray optics approach was then adapted by Guck et al. [8,28] to the optical force distribution on the cell surface inside the optical stretcher. The model is valid because the studied biological cell samples are normally much larger (~10 µm) than the wavelength of the applied laser light, a situation known as the large-particle regime. This approach is, however, not sufficient to describe the full laser-particle interaction when loosely focused (or collimated) beams are considered, as in that case the interaction region generally lies within the beam Rayleigh range, and thus a different beam-decomposition technique (paraxial ray optics-PRO, [43]) has to be applied. Afterwards, other studies improved this method by including the effects of multiple internal reflections of the laser light inside the cell [44][45][46]. In the following we briefly review the PRO approach, so as to give the reader a better understanding of the beam decomposition and optical force calculation technique. The optical force calculation is basically performed as a two-step process: first, an adequate decomposition of the optical beam into a series of optical rays in the region of interest (i.e., in the area occupied by the cell/particle) is calculated; then, the interaction of each optical ray with the cell/particle is computed, thus allowing the overall beam effect on the sample to be evaluated. The first step, as sketched in Figure 6a, is to decompose a non-focused Gaussian laser beam (such as that generally emitted by optical fibers) into a distribution of individual rays, each characterized by a proper direction and position in space, and carrying a certain amount of optical power. Considering the far-field distribution through an angular spectrum decomposition technique, the power carried by a ray that intercepts the plane at coordinate z at a distance ρ0 from the beam axis is calculated as the integral of the intensity, as a function of the radial coordinate ρ, over the portion of the annulus delimited by ∆ρ, see Figure 6a. The spatial phase gradient is then used to determine the propagation direction of each ray, which is perpendicular to the wavefronts.
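As a concrete illustration of this first step, the short sketch below assigns a power to each ray using the closed-form fraction of Gaussian-beam power enclosed within a given radius, anticipating the beam-width expression given below. This is a minimal sketch under the ideal-Gaussian assumption, not the published implementation; all names are illustrative, and the parameter values are taken from the example of Figure 8.

import numpy as np

# Minimal sketch (not the published code) of the ray-power assignment:
# the power of the ray crossing plane z at radius rho is the Gaussian-beam
# intensity integrated over the annulus assigned to that ray.
def ray_powers(P_total, w0, wavelength, z, rho_edges):
    z_R = np.pi * w0**2 / wavelength              # Rayleigh range
    w = w0 * np.sqrt(1.0 + (z / z_R)**2)          # beam width at plane z
    # Closed form for an ideal Gaussian beam: the fraction of the total
    # power enclosed within radius rho is 1 - exp(-2 rho^2 / w^2).
    enclosed = 1.0 - np.exp(-2.0 * rho_edges**2 / w**2)
    return P_total * np.diff(enclosed)            # one power value per annulus

# Example of Figure 8: 10 mW, 3.1 um waist, 1.07 um wavelength,
# plane 30 um from the waist; 100 annuli out to 10 um.
edges = np.linspace(0.0, 10e-6, 101)
print(ray_powers(10e-3, 3.1e-6, 1.07e-6, 30e-6, edges).sum())  # ~10 mW

The per-annulus powers sum to essentially the full beam power once the outermost radius spans a few beam widths, which provides a quick consistency check of the decomposition.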
In particular, by adopting the paraxial approximation and considering a Gaussian beam with waist ω0 at z = 0, the optical field can be analytically described by two simple equations giving the electric-field amplitude (A) and the radius of curvature (R) of the wavefronts, respectively:

A(ρ, z) = A0 (ω0/ω(z)) exp(−ρ²/ω(z)²) (2)

R(z) = z [1 + (zR/z)²] (3)

where zR (the so-called Rayleigh range) and ω(z) (the beam width as a function of the propagation distance) are:

zR = πω0²/λ (4)

ω(z) = ω0 [1 + (z/zR)²]^(1/2) (5)

Figure 6. Scheme of the optical field determination from a Gaussian laser beam with the paraxial ray optics approach. (a) The power carried by each ray is calculated as the integral of the beam intensity, as a function of the radial coordinate, over the area of the annulus associated with the ray; (b) the amplitude A(ρ, z) and (c) the curvature radius R(ρ, z) along the axis z are calculated, and exploited respectively for the evaluation of the power and of the propagation direction of each ray.

The amplitude of each ray is used to assess the optical power P it carries, whereas the curvature radius is used to determine its propagation direction, thus obtaining a precise description of the beam properties in the area occupied by the cell/particle. Then, as a second step, the interaction of each single ray with the cell/particle is considered. The reference system used to evaluate the interaction between each ray and the particle is shown in Figure 7a. We assume for the calculations that the particle has a refractive index (n p ) larger than that of the surrounding medium (n m ). According to the analysis already reported in the literature by Ashkin et al. and Brevik [42,47], it is possible to evaluate the force transferred by each single optical ray through its interaction with the particle surface, due to the difference of the refractive indices, as it undergoes multiple reflections and refractions at the boundary of the sphere, see Figure 7b.

Figure 7. Coordinate system for a single ray interacting with a particle. (a) A single ray from a Gaussian laser beam hits the surface of the particle; the relative positions of the laser beam and the particle can be arbitrary. The incidence plane is defined by the ray and the normal direction of the particle surface at the hitting point, and is indicated by the gray area; (b) the single ray undergoes multiple reflections and refractions at the boundary of the particle.

Taking the first ray-particle interaction as an example, the incidence angle, referred to as θ, is defined by the geometries of both the Gaussian laser beam and the particle. According to Snell's law and the Fresnel equations, the refraction angle γ, the transmission coefficient (T) and the reflection coefficient (R) can be calculated exactly. Then, the momentum carried by the photons in the incident, reflected and refracted beams can be calculated through Equation (1). We denote the momenta of the incident, transmitted and reflected rays by p i , p t and p r , and their directional unit vectors by a i , a t and a r , respectively. According to the momentum conservation law, the stress σ applied on the local surface of the particle can be expressed as [44]:

σ = P Q/(c0 A) (6)

where A is the area of the particle irradiated by the single ray, P is its optical power, c0 is again the light speed in vacuum and Q is a dimensionless momentum-transfer vector. Furthermore, it has been proved that the direction of this optical stress is always perpendicular to the surface on which it acts, pointing away from the optically denser medium [8,44], as shown in Figure 7b.
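The second step can be sketched in the same spirit. The snippet below evaluates the momentum balance for the first ray-surface interaction only: Snell's law gives γ, the unpolarized Fresnel equations give R and T, and the balance of the photon momenta of Equation (1) yields the dimensionless transfer vector Q, from which the stress of Equation (6) follows. The cited models [44][45][46] additionally track the internal reflections, which are omitted here; this is a hedged illustration, not the code used in the referenced works.

import numpy as np

N_M, N_P = 1.33, 1.37   # refractive indices of the medium and the particle

def first_surface_Q(a_i, normal):
    # a_i: unit vector of the incident ray; normal: outward unit normal of
    # the particle surface at the hitting point (a_i . normal < 0 on entry).
    cos_t = -np.dot(a_i, normal)                       # cos(theta), incidence
    sin_g = N_M * np.sqrt(1.0 - cos_t**2) / N_P        # Snell's law
    cos_g = np.sqrt(1.0 - sin_g**2)                    # cos(gamma), refraction
    # Unpolarized Fresnel power coefficients (average of s and p)
    rs = (N_M*cos_t - N_P*cos_g) / (N_M*cos_t + N_P*cos_g)
    rp = (N_P*cos_t - N_M*cos_g) / (N_P*cos_t + N_M*cos_g)
    R = 0.5 * (rs**2 + rp**2)
    T = 1.0 - R
    a_r = a_i + 2.0 * cos_t * normal                   # reflected direction
    a_t = (N_M/N_P)*a_i + ((N_M/N_P)*cos_t - cos_g)*normal  # refracted direction
    # Momentum balance of Equation (1): incident flux minus the reflected
    # and transmitted parts, each weighted by the local refractive index.
    return N_M*a_i - R*N_M*a_r - T*N_P*a_t

# Normal incidence at the entry pole: Q points against the propagation
# direction, i.e., the stress pulls the front surface towards the source,
# consistent with the "away from the denser medium" rule stated above.
print(first_surface_Q(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])))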
In the same way, the optical stress from the subsequent interactions of this single ray can be calculated, and the procedure can easily be adapted to the other rays by simply changing the first incidence angle and the associated power. The sum of the contributions from all rays results in an optical stress distribution on the surface of the particle. As an example, Figure 8 shows the calculated optical stress distribution on a particle under different conditions. The applied Gaussian laser beam has a wavelength of 1.07 µm, a beam waist of 3.1 µm and carries an optical power of 10 mW. The particle, which is considered to have a diameter of 5 µm and to lie 30 µm away from the beam waist position, exactly along the laser beam axis, has a refractive index of 1.37, similar to real cells, while the surrounding medium has a refractive index of 1.33 (corresponding to that of water). The optical stress distribution produced on the particle surface by a single laser beam shining from left to right and from right to left is shown in Figure 8a,b, respectively. The resulting optical stress profiles are rotationally symmetric with respect to the beam axis, and a single laser beam produces a net force pushing the particle away from the laser source. It is interesting to notice that the "spikes" observed in the stress profiles are produced by the second-order internal laser-particle interaction [44], and they tend to become more relevant as the particle diameter becomes smaller with respect to the beam diameter. When the two counter-propagating beams impinge simultaneously on the particle, see Figure 8c, the net applied force is zero, and the particle is stably trapped in the center. However, the local optical force acting on the surface is not zero, and it significantly increases with respect to the single-laser case [8]. This is the configuration commonly exploited to create an OS: the stretching is obtained by simply increasing the laser beam power. Figure 8d-f shows the optical stress distributions for different particle sizes (5 µm, 10 µm, 15 µm) under the same conditions as Figure 8c (Figure 8d is identical to Figure 8c and is shown only to ease the comparison). By comparing the three panels of the second row, it can be immediately seen that, as the particle size increases, the optical force becomes more concentrated along the laser beam axis and the shape of the stress distribution slightly changes. Figure 8g-i shows the optical stress distribution on a 5 µm particle when the distance of the particle from the beam waists is set to 30, 40 and 50 µm, respectively. By increasing the distance, the optical stress distribution slightly changes and the overall stress decreases, as can be intuitively understood by noticing that the overall intercepted power is reduced as the beam broadens during propagation because of diffraction.

Figure 8. Calculated optical stress distributions. The Gaussian laser beam has a beam waist of 3.1 µm, a wavelength of 1.07 µm and carries an optical power of 10 mW. The refractive index of the particle is 1.37 and that of the medium is 1.33. The distance between the particle center and the beam waist (for either one or two lasers) is indicated in each panel, together with the particle diameter. (a-c) Optical stress from left-side, right-side and double-side laser radiation, respectively; in (d-i) the double-side irradiation is considered; (d-f) show how the optical stress distribution changes as the particle size increases; (g-i) show how it changes as the distance increases.
Cell Stretching Procedure

Similar procedures for cell stretching measurements are reported in the literature [8,23,24,29]; the workflow is graphically depicted in Figure 9. First, the cell sample is injected by an external pump system into the microfluidic circuit of the OS, and the two opposing laser beams are turned on at low power. The cell flow is then set as fast as possible, provided that the laminar flow is maintained and that both a good imaging of the flowing cells (which depends on the camera frame rate) and an efficient trapping (which depends on the cell velocity with respect to the laser power) are assured. Once a single cell reaches the laser-irradiated zone and is trapped by the optical force, the flux is stopped, so that no additional cells reach the "trapping area". After this, the optical power output by the two waveguides is increased to the preset "stretching power" value with a step-like power profile, and the progressive deformation of the cell is recorded by a camera. After a certain time interval (typically about 5 s), the laser power is switched back to the low "trapping" value, and the stretched cell is observed while it partially recovers its original shape. During the stretching and recovery processes, the images are recorded and saved at a high frame rate, to allow for subsequent image analysis. Finally, the studied cell is released by switching off the laser beams and restarting the flux, and the whole procedure is then repeated for other single cells (additional details can be found in [8,23,29]).

Characterization of Cell Optical Deformation

In order to characterize the cell optical deformation from stretching measurements, the previously stored images are analyzed. Different methods are reported in the literature [8,26,48]. As an example, here we describe the method exploited by Guck, Lincoln et al. [8,26], based on image polar transformation and edge detection algorithms; a compact sketch of the same pipeline follows this description. The procedure is schematically represented by the different steps shown in Figure 10. First of all, each image is transformed into a polar coordinate system, by exploiting an automatic technique for cell-center identification. The radius limit of the polar coordinates is determined by the border of the original rectangular image, see the blue circle in Figure 10a, which is also present in Figure 10b as a straight line after the polar transformation. Accordingly, the bottom part of Figure 10b corresponds to the outside of the cell. The intensity of the gray-scale image along each angular direction (i.e., each vertical line in panel (b)) is extracted (see Figure 10c, showing the intensity profile corresponding to the green line in Figure 10b) and smoothed by a low-pass Fourier filter to remove small spikes. Then, the derivative of the intensity values along the radius is calculated and the minimum point (corresponding to the fastest transition from the bright to the dark region around the cell border) is recorded, as indicated by the red circle in Figure 10d. The same process is applied for each angle, yielding the cell border at each polar angle, see Figure 10e. This cell border curve in polar coordinates is then transformed back, and the cell contour is plotted over the original phase-contrast image, as shown in Figure 10f. For the edge detection, other signal processing methods can additionally be implemented to optimize the detection result, like squaring the intensity profile to obtain a stronger contrast, or defining a threshold to simplify the global minimum search, as demonstrated by Guck et al. [8].
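The sketch below condenses the pipeline just described; a Gaussian low-pass filter stands in for the Fourier filter of the original method, and all function and parameter names are illustrative rather than taken from the cited implementations.

import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter1d

def cell_contour(image, center, r_max=60.0, n_angles=360, n_radii=200):
    # Sample the gray-scale image on a polar grid around the cell center
    angles = np.linspace(0.0, 2.0*np.pi, n_angles, endpoint=False)
    radii = np.linspace(0.0, r_max, n_radii)
    ys = center[0] + radii[:, None] * np.sin(angles)[None, :]
    xs = center[1] + radii[:, None] * np.cos(angles)[None, :]
    polar = map_coordinates(image.astype(float), [ys, xs], order=1)
    # For each angle: smooth the radial profile, then locate the steepest
    # bright-to-dark transition (minimum of the derivative) = cell border
    border = np.empty(n_angles)
    for k in range(n_angles):
        profile = gaussian_filter1d(polar[:, k], sigma=3)
        border[k] = radii[np.argmin(np.gradient(profile))]
    return angles, border

Once the contour radii are known for every frame, the elongation along the beam axis and the contraction perpendicular to it can be read from the border values around 0°/180° and 90°/270°, respectively, which is the input needed for the strain metrics of Equations (7) and (8) discussed below.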
With this method, a very high resolution of 100 nm can be obtained, which is sufficient for cell border recognition. The image analysis procedure is applied to all the images from the cell optical stretching measurement, and the cell contour is found in every frame. Afterwards, the cell optical deformation, in terms of elongation along the laser beam axis and contraction in the perpendicular direction, can be derived from these contours. As an example, Figure 11 shows the deformation profile of a single cell (MCF7) during 5 s of optical stretching and a further 5 s of recovery. During the stretching, a typical creep compliance curve of the cell size variation can be observed; the deformation is reported as the absolute cell size. The cell is increasingly elongated (X elongation ) along the laser beam axis and contracted (Y contraction ) in the perpendicular direction for the whole stretching duration. After stretching, the cell partially recovers its shape. Figure 11c,d show the microscope images of the cell trapped at the very beginning of the stretching and maximally stretched after 5 s of high-power irradiation. To explain this time-dependent creep behavior of the cell deformation in response to a constant step stress, Wottawah et al. [49,50] applied constitutive equations; by fitting the experimental results with the theoretical prediction, viscoelastic parameters of the cell, like the Young's modulus or the shear modulus, can be derived. Other studies proposed more complex 3D mechanical models to mimic the real structure of a single cell and to study the contributions of the different cellular mechanical structures. Ananthakrishnan et al. [51,52] developed two structural models, a thick-shell model for the actin cortex and a three-layered model for the whole cell, and found that the outer actin cortex mainly determines the structural response of the cell during stretching. Another interesting result was obtained by Gladilin et al. [53], who created a three-component model (including the nucleus, the cortical actin filaments and the perinuclear vimentin intermediate filaments) allowing the contribution of each component to be isolated, thanks to the use of specific drug treatments. Besides, the maximum relative deformation of the cell (i.e., the strain) is also often used in the literature as a simple reference parameter for comparing cell mechanical properties. It can be either the relative elongation (Equation (7)) or the relative eccentricity variation (Equation (8)). In both equations there is a correction term "corr", obtained by numerical simulations, which accounts for the variation of the optically induced stress profile with the cell size and refractive index, as described in References [8,23].

OS as a Tool to Analyze Cell Lines, Drug Treatments and Cellular Organelles

Since its invention, the OS has clearly demonstrated its ability to test cell mechanical properties, and during the last decade it has been extensively and successfully applied to study different cell samples, the effects of various drug treatments on cell mechanics, and internal cellular organelles.

Optical Stretching of Red Blood Cells and Lipid Vesicles

The first application of the OS to single cell mechanics concerned the study of red blood cells (RBCs) [8].
The simple structure of RBCs (they have no organelles or internal structure) and the possibility of making them almost perfectly spherical by swelling in a hypotonic suspension make them ideal candidates for preliminary analyses, as they can be effectively modeled as homogeneous spheres with an isotropic refractive index, thus simplifying the analysis of the cell mechanical response. Figure 12a shows the microscope images of a single RBC stretched at gradually increasing laser power levels [45]. The cell is elongated along the laser beam axis and contracted in the perpendicular direction. Given the optical field of the stretching laser (magnitude and direction), the RBC size and its position in the optical field, and the refractive indices of the cell and of the buffer medium, the optical force applied on the cell surface can be precisely evaluated, as discussed in Section 3.1. The relative deformation of the RBCs is represented in Figure 12b, and it can be seen that it maintains a linear response with respect to the optical power up to 150 mW. Within this linear regime, linear membrane theory can be used to describe the deformation of RBCs [8,45] (see the curve in the figure), and a membrane stiffness of about Eh = (20 ± 2) µN m−1 (the product of the membrane Young's modulus and its thickness) was derived by Bareil et al. Other studies reported the analysis of phospholipid vesicles (synthetic structures with a spheroidal lipid-bilayer shape) by the OS measurement technique [50,61,62]. Similarly to RBCs, they provide a simple mechanical model for studying membrane mechanics, which is a very important aspect of cellular function. In Figure 13, an example of a vesicle trapped at low laser power and then stretched at high power is reported. The time-resolved deformation of vesicles under different laser powers is plotted in Figure 13c. It is clear that vesicles respond quickly to the applied optical stimulus: they reach their final deformation immediately after the stretching starts, and they recover their initial shape very rapidly after the stretching stops. Interestingly, by stretching vesicles, Solmaz et al. [61] were able to extract the bending modulus of the lipid bilayer, while Delabre et al. [62] proposed a vesicle model based on a quasi-spherical approximation, which also allows the laser heating effect on the vesicle deformation to be taken into account.

Optical Stretching of Eukaryotic Cells and Drug Treatment Effects

Differently from erythrocytes, eukaryotic cells have a much more sophisticated structure, including various internal organelles, a nucleus and several immersed polymer networks. In particular, these polymer networks, which consist mainly of three different filamentous proteins (actin filaments, microtubules and intermediate filaments), form the cellular cytoskeleton and provide the basic mechanical support for the whole cell in terms of mechanical strength and morphology [23,50]. Guck et al. [8] first exploited the optical stretcher to analyze BALB 3T3 fibroblast cells and showed that, even when a very high laser power was applied, the cells showed only a very small deformation: because of their internal structure, BALB 3T3 cells behaved like relatively hard spheres. Similar behaviors were obtained for different populations of eukaryotic cells. Figure 14 shows an example of a single MCF7 cell trapped at a low power of 25 mW per side and stretched at a much higher power of 650 mW per side; even in this case, only a small deformation can be seen.
After the successful application of the optical stretcher to eukaryotic cells, Guck et al. [23,26] exploited it to evaluate variations in the mechanical properties of cell lines at different stages of disease progression, e.g., healthy cells with respect to tumorigenic and even metastatic ones. The effects of drug treatments on cell mechanical properties were also investigated. In particular, a well-characterized line of human breast epithelial cells and its cancerous counterparts were considered by Guck and coworkers: MCF10, MCF7 and MDA-MB-231, which are normal, cancerous and highly metastatic cells, respectively. In Figure 15, the optical deformability of the three cell lines is reported. The results showed that the distributions of the cells' optical deformability can be fitted with normal distributions, and the obtained optical deformability values are: 10.5% ± 0.8% for MCF10, 21.4% ± 1.1% for MCF7 and 33.7% ± 1.4% for MDA-MB-231. Remarkably, even with few cells measured, the optical deformabilities of the three populations are statistically distinguishable. The differences in the optical deformability of the three cell lines can be directly related to their different metastatic potentials. Guck and coworkers also applied drug treatments to the MCF7 and MDA-MB-231 cell lines, and the corresponding results are included in Figure 15. The MCF7 sample was treated with the phorbol ester TPA, which dramatically increases MCF7 cell invasiveness and metastatic potential; it can be observed in Figure 15a that the optical deformability increases with this drug treatment (modMCF7), indicating that a higher optical deformability corresponds to a higher metastatic potential. Conversely, MDA-MB-231 cells treated with all-trans retinoic acid became less aggressive and their optical deformability decreased, see modMDA-MB-231 in Figure 15b, confirming the same correlation between metastatic competence and cell optical deformability. Other cell lines have also been evaluated. Lautenschläger et al. [63] studied acute promyelocytic leukemia (APL) cells with the optical stretcher and revealed a significant softening during differentiation; furthermore, they exposed the cells to paclitaxel and found that this treatment does not alter the cells' compliance during optical stretching, but reduces the cell relaxation after the optical stress is removed. Schulze et al. [64] applied the optical stretcher to human skin fibroblast cells and showed that an increase in age is clearly accompanied by cell stiffening. Ekpenyong et al. [9] evaluated the differences in cell mechanical properties during the differentiation of human myeloid precursor cells into three different lineages, and observed that a reduction in steady-state viscosity is a physiological adaptation for enhanced migration through tissues.

Active Mechanical Sorting Based on Cellular Optical Deformability

As previously discussed, the cellular optical deformability measured by the OS provides a simple marker for cell analysis, making it possible to distinguish healthy, tumorigenic and metastatic cells, as well as to study the cell mechanical response to different drug treatments [8,23,63]. Starting from this evidence, the possibility of using the cells' optical deformability as a criterion for single-cell sorting was recently demonstrated by two separate studies [24,25], thus opening the way to biological analyses requiring the selection and recovery of cells that exhibit specific mechanical properties.
The concept behind this result is quite simple: thanks to a real-time analysis of the cell stretching (which can be realized by an automated computer process), the trapped and stretched cells are immediately classified as "interesting or not", and the presence of two separate outputs in the microfluidic structure allows the trapped cell to be sorted according to the result of the stretching measurement, without requiring any additional marker, like a fluorescent stain [65], and without needing a large cell population [66][67][68][69].

Chip Layout for Cell Stretching and Sorting

Since cell sorting naturally requires more than one output, the microfluidic design has to differ from that of a standard OS, and the optical section also requires some modifications. As an example, the optofluidic microchip proposed by Yang et al. [24] for active cell sorting on the basis of optical deformability is shown in Figure 16. This chip is realized in a very small piece of silica glass by the FLICE technique, as described in Section 2.4, and its size is 2 mm (thickness) × 1.5 mm (width) × 4 mm (length). The microfluidic network design is almost the same as that of an optical sorter [65] and consists of an X-shaped microfluidic circuit, including two inlets, two outlets and a common central channel, see Figure 16a. The experimental setup is schematically shown in Figure 16c. After injecting the cell suspension (from inlet 1) and the buffer medium (from inlet 2), a laminar flow regime is established in the central channel, keeping the flowing cells in their own flow stream up to the chip outlet (port 3). Two fiber-to-fiber U-benches, inserted along the optical path, make it possible to perform both the cell stretching measurement and the subsequent sorting by optical forces with the same optical waveguides: by simply reducing the intensity of one beam (e.g., putting an attenuator in the corresponding U-bench), the optical power levels of the two waveguides are unbalanced, so that the stronger beam gently pushes the cell towards the desired stream, thus achieving cell sorting. When a cell is ready to be tested, a customized LabVIEW program performs the real-time analysis of the optical stretching measurement; the obtained cell-deformation value is then automatically compared with a user-defined threshold, and the cell is sorted accordingly, either to the "waste" outlet (port 3) or to the "selected cells" outlet (port 4); a schematic version of this decision loop is sketched at the end of this subsection. This procedure is performed continuously until the desired number of cells has been sorted. A study by Faigle et al. [25] presented a similar design, realized with a different fabrication technique: both the internal microfluidic channel and the optical fiber slots are created in separate glass substrates by chemical etching. The chip assembly, especially the fiber insertion and alignment, is improved by optimizing the etching and bonding geometry. The direct use of optical fibers provides highly efficient optical power delivery from the laser source to the active stretching area, and the thin glass slide used as the chip bottom layer guarantees a good imaging quality. Moreover, more complex channel geometries can be obtained with this method.
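The decision loop referenced above can be summarized in a few lines of Python; the published systems implement it in LabVIEW, and every object and method name below (chip, trap_next_cell, and so on) is a hypothetical placeholder for the chip/instrument control layer rather than a real API.

# Hypothetical control-loop sketch of deformability-based sorting;
# all methods are placeholders, not a documented instrument interface.
THRESHOLD = 0.11        # user-defined deformability threshold (e.g., 11%)

def sort_one_cell(chip):
    chip.trap_next_cell()                     # low-power dual-beam trapping
    chip.stop_flow()                          # keep the trapping area clear
    deformation = chip.stretch_and_measure()  # step to high power, analyze images
    if deformation > THRESHOLD:
        chip.push_towards(outlet=4)           # unbalance the beams: collect
    else:
        chip.push_towards(outlet=3)           # balanced beams: waste stream
    chip.restart_flow()
    return deformation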
The microchips realized by the FLICE technique generally have the big advantage of being monolithic and compact; however, the internal surface of the microfluidic channel may be quite rough, thus causing imaging distortion and hindering the imaging quality, which is fundamental for an appropriate cell contour recognition and for a correct deformation evaluation. This issue was recently solved by exploiting a new laser irradiation geometry [24], as shown in Figure 17. The main idea is to write the microchannel structure "along" the beam propagation direction, and not perpendicularly to it as in previous fabrications. This apparently small change in the fabrication procedure allows all the microchannel surfaces to be defined in a significantly smoother way, thus strongly decreasing the internal surface roughness and greatly improving the imaging quality, as evident in Figure 17. The physical reason underlying this surface quality improvement was suggested to be connected to the highly ellipsoidal shape of the writing voxel: with the newly proposed approach, all the microchannel surfaces are produced by the "longer side" of the voxel, while its "shorter and sharper" part has almost no impact on the structure definition, because of the voxel translation direction. By using the "longer side" of the voxel and keeping the same separation between the different "writing tracks", a larger overlap and a better uniformity of the laser irradiation are obtained, leading to a smaller roughness after the chemical etching.

Cell Sorting Efficiency Discussion

With the newly proposed microchip, Yang et al. performed cell sorting experiments with metastatic (A375P) and highly metastatic (A375MC2) human melanoma cells, two cell lines with a very similar size distribution (17 ± 2 µm in diameter, see Figure 18) and a slightly different optical deformability (8.4% ± 1.1% for A375P and 10.1% ± 1.8% for A375MC2, using two optical beams of 1.2 W each), reflecting their different mechanical properties and offering an intrinsic cell marker to separate them. As discussed in Section 4.2, the higher optical deformability of A375MC2 is directly related to its higher metastatic potential. In addition, it should be noted that the cell size and optical deformability distributions of these two cell samples follow a normal distribution very well, see the fitted Gaussian curves in Figure 18. The cell sorting experiment was carried out using a 1:1 cell mixture of A375P and A375MC2 at the same concentration (so that the same number of A375P and A375MC2 cells is present in the final suspension), which allows the overall deformation distribution to be considered as the sum of the two Gaussian curves reported in Figure 18. After each cell stretching measurement, the measured deformation was compared with a preset threshold value: if it was higher than the threshold, the cell was sorted into the collection branch of outlet 4; otherwise, the cell was addressed to the waste branch of outlet 3, see Figure 16. In this way, an enriched sub-population of the highly metastatic cells A375MC2 can be obtained, as shown in the example of Figure 18c: by selecting cells with an optical deformation larger than 11%, more A375MC2 cells (blue pattern) than A375P cells (red pattern) are collected. The expected enrichment for a given threshold can be estimated directly from the two Gaussian distributions, as sketched below.
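Under the two-Gaussian model of the mixture described above, the expected purity and acceptance rate for a given threshold follow directly from the Gaussian tail probabilities. The sketch below uses the deformability parameters quoted for the two lines; everything else about it is illustrative, not taken from the cited works.

from scipy.stats import norm

# Gaussian models with the deformability parameters quoted above (in %)
p_line  = norm(loc=8.4,  scale=1.1)    # A375P
mc_line = norm(loc=10.1, scale=1.8)    # A375MC2

def expected_sorting(threshold):
    f_p, f_mc = p_line.sf(threshold), mc_line.sf(threshold)  # tails above threshold
    purity = f_mc / (f_p + f_mc)       # A375MC2 fraction among collected cells
    acceptance = 0.5 * (f_p + f_mc)    # collected cells / stretched cells (1:1 mix)
    return purity, acceptance

print(expected_sorting(11.0))          # high purity, but low acceptance rate

For an 11% threshold this simple model predicts a collected population that is largely A375MC2, at the cost of a low acceptance rate; this purity/throughput trade-off is exactly the effect discussed below.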
Figure 18. (c) The whole area under each cell curve is set equal, representing the same concentration; by defining a deformation threshold, a sub-population of A375MC2 can be enriched by collecting the cells with higher deformability; (d) the ratio of A375MC2 cells in the collected sample, and the ratio of cells in the initial sample expected to exhibit a deformability higher than the threshold (acceptance rate), versus the chosen threshold value. Figure reproduced from Reference [24] with permission from the Royal Society of Chemistry.

In order to check the efficiency of this technique, it was necessary to have a method to measure the percentage of A375P and A375MC2 cells in the collected sample; this was achieved by pre-staining the A375MC2 cells with a fluorescent dye (LDS 751). The cell sorting experiment was repeated with different threshold values, and in each collected cell sample the percentage of A375MC2 was calculated by simply counting the fluorescent and non-fluorescent cells. The experimental results matched the theoretically expected values well, but a deviation from the theoretical values was observed when a high threshold value (e.g., 11%) was used. This is connected to the fact that, by increasing the threshold, the acceptance rate (i.e., the number of collected cells divided by the number of stretched cells) is reduced, thus making the measurement longer and giving the cells the possibility to deposit on the channel floor or to cluster, causing undesired perturbations in the system. Similar results were also observed in the study of Faigle et al. [25], suggesting that active cell sorting based on cell mechanical properties can become a reliable and useful technique for the selection of specific sub-populations consisting of very few cells.

Optical Heating and Temperature Effect

For laser applications in cell biology, the heating due to the absorption of optical radiation is an important issue that should be addressed. Indeed, possible thermal damage affects both the vitality of the samples and the validity of the results.

Optical Heating and Temperature Measurement

In an optical tweezer, tightly focused light beams are used, producing extremely high light intensities (on the order of a few MW/cm²) in the beam focus and hence a significant increase of the local temperature. In particular, Peterman et al. [70] measured that the temperature at the focus of an optical tweezer increases by 34.2 ± 0.1 K/W with a 1064 nm laser, for polystyrene beads of 2.2 µm diameter in a glycerol medium. The OS, on the other hand, is based on a completely different trapping configuration, obtained through two counter-propagating non-focused laser beams, as shown in Figure 1, and efficient cell trapping is achieved even using a low optical intensity from each fiber (on the order of a few mW over a large area). However, when cell-stretching measurements are conducted as described in Section 3.2, the sample flow is stopped, reducing the heat diffusion, and the trapping laser power is increased to between 0.5 and 1.5 W per side, which can induce a non-negligible temperature increase. In order to monitor the temperature change during cellular optical stretching, a precise measurement of the temperature during the whole process should be performed. Moreover, the measurement should be done directly inside the microfluidic channel, in the region illuminated by the laser radiation.
However, the geometry of the microchip and its small dimensions (100-300 µm for the internal channel) prevent the use of conventional thermal sensors for this measurement. Additionally, it is difficult to obtain a spatially resolved temperature profile from a macroscopic sensor, because it can only deliver an area-averaged result. All these issues were successfully overcome by a method called fluorescence ratio thermometry [71], in which the laser-induced fluorescence (LIF) of two dyes, Rhodamine B and Rhodamine 110, is employed as a temperature indicator. The first dye, Rhodamine B, has a well-characterized temperature-dependent LIF and a high temperature sensitivity (2.3% K−1), while the second dye, Rhodamine 110, has a temperature-independent fluorescence and is therefore used as a reference. The temperature is obtained from the fluorescence intensity ratio of Rhodamine B and Rhodamine 110, and the combined use of the two dyes avoids the problems due to local fluctuations of the excitation light intensity or of the dye concentration. A further advantage of this method is that no normalization is required, and the absolute temperature can be directly measured with a resolution of 2 °C. The spatial temperature profile is obtained by measuring the fluorescence intensity ratio across the channel section, at the central plane of the optical trap, with a laser scanning confocal microscope with a spatial resolution of less than 0.5 µm. The resulting temperature distributions are shown in Figure 19a,b for a total power of 2 W (1 W per side) in the trap, highlighting a temperature rise of 25 °C over a background temperature of 21 °C. It should be noted that the spatial temperature distribution is obtained after the temperature equilibrium is established, and by averaging multiple successive scans. The same temperature measurements were repeated for different laser power values, from 0.5 to 1.25 W per side, and the results show a linear dependence of the temperature on the applied laser power, with a temperature increase rate of about 13 °C/W at the laser trap center. The same method was also applied to evaluate the temperature variation as a function of time [71][72][73], which is an important parameter, as during the stretching measurement the trapping power is abruptly increased for some seconds and then lowered back to the original trapping value, see Figure 20. In order to have a fast response and a sufficient time resolution, the averaging procedure is removed; Figure 20 shows the temperature evolution for a step-like laser power increase. It can be seen that, after the laser power is changed (either increased or decreased), the temperature equilibrium is reached within fractions of a second. Wetzel et al. [72] also found that the temperature immediately decreases to the equilibrium value after the laser is turned off.

Figure 20. Temperature evolution for a step-like laser power increase. The laser is turned on at t = 2 s and has a total power of 2 W. Figure reproduced from Reference [71] with permission from the Optical Society of America.

All the temperature measurements in space and time were performed with the microfluidic flux stopped, since maintaining a medium flow would easily remove the heat from the laser-irradiated region. Additionally, it should be taken into account that the temperature increase from laser heating is strongly dependent on the laser wavelength and on the cell solution medium.
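The quoted numbers can be cross-checked against the linear heating law: about 13 °C per watt of total trap power, on top of the background temperature. The snippet below is only a back-of-the-envelope check of that published slope, not a heat-transfer model, and applies to the stopped-flow conditions described above.

# Back-of-the-envelope check of the linear heating law quoted above
# (about 13 degC per watt of total trap power, stopped-flow conditions).
def trap_temperature(total_power_W, background_C=21.0, rate_C_per_W=13.0):
    return background_C + rate_C_per_W * total_power_W

print(trap_temperature(2.0))  # ~47 degC, consistent with the ~25 degC rise
                              # over 21 degC measured at 1 W per side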
Another interesting effect produced by the presence of a temperature gradient is the so-called Marangoni effect, due to the surface-tension gradient induced by the temperature profile. Regarding this point, we note that the temperature gradient is non-negligible only in the direction perpendicular to the beam axis, which allows us to neglect its impact on the elongation along the optical beam direction. Additionally, since the cell is trapped in the position corresponding to the maximum temperature, no net force acts on the cell, and thus no displacement from the trapping position is observed.

Optical Heating Impact on Cell Viability

As the OS is essentially used for single-cell mechanics measurements, it is mandatory to investigate the impact of the optically induced heating on cell viability and properties. In particular, it must be taken into account that the cytoskeleton may be altered by cell over-stretching as well as by heating, which would result in an inaccurate mechanical characterization. Differently from conventional research in this field, where small temperature increases (5-10 K) and longer time spans (minutes or hours) are applied [74], during optical stretching cells experience a high temperature increase for a short time, as discussed in Section 6.1. Standard proliferation and epigenetic analyses are the most accurate ways to verify cellular viability. However, they normally require a large number of cells and several days to complete the assay, which makes them inapplicable if the viability of each studied cell needs to be evaluated as soon as the measurement is performed. A simple approach proposed in the literature [8] is to observe the cell appearance through phase-contrast microscopy, because dead cells usually show less contrast and no clear cellular contour. Despite its simplicity, this method is not fully reliable, being affected by human judgment. A more careful and accurate method is the use of a vital stain: Trypan Blue allows viable cells to be distinguished because the dye penetrates only the membrane of dead or damaged cells, which cannot maintain their normal functionality, while it is excluded by viable ones. Even if this method allows single-cell evaluation, it cannot be applied continuously after each cell stretching measurement, because the cells need to be removed from the microchannel for staining and checking. A method with on-site test capability is therefore needed, and can be found in a different vital staining method based on calcein acetoxymethyl ester (calcein-AM). This dye is membrane-permeable and, more importantly, is hydrolyzed by endogenous esterases into the green fluorescent calcein, which is retained only in the cytoplasm of living cells [75]. By measuring and comparing the green fluorescence intensity of each cell before and after the optical stretching measurement, the cell viability can thus be determined. However, a photobleaching of 10%-20% at each imaging step strongly decreases its sensitivity. A further method, even if with a very low throughput of less than 5 cells/h, based on cell spreading, has been proposed specifically for this purpose and successfully demonstrated in the literature [72]. After the cell stretching measurement, the flux is not reactivated and the laser is completely turned off, so as to let the studied cell slowly sink to the floor. After about 10 min, a live cell spreads on the floor and attaches to it.
Since cell spreading is a vital feature of live cells, cell viability can be determined by observing cell spreading and attachment on the channel floor. This spreading behavior involves the reassembly and rearrangement of the cell cytoskeleton and plasma membrane, and can thus be recognized as an effective indicator of cell viability; on the contrary, damaged cells show very weak or no spreading. By counting the cells that show spreading ability, the cell viability can easily be obtained. Figure 21 shows the results of the cell viability check after optical stretching in two different situations: in one case the temperature is increased by changing the laser power, and in the other by extending the stretching duration at the same laser power. It is found that more than 60% of the cells can survive a short laser heating of 0.5 s up to 58 ± 2 °C, or can resist a longer laser heating of 5 s at 48 ± 2 °C.

Temperature Effect on Cell Mechanical Properties

High laser powers are needed in order to induce an appreciable cell deformation during optical stretcher measurements. Although the beams are not focused and the cells are not directly damaged by the laser radiation, they might suffer from heating due to the high laser power. Moreover, in a cell optical stretching measurement the deformation could be produced by both the optical force and the optical heating. In order to evaluate their respective contributions and better understand the temperature effect on cell mechanical properties, a series of studies have been presented [39,76-79], which introduce variations to the standard optical stretcher to monitor and modify the temperature at the cells. Figure 22 reports two examples sharing the same idea of realizing an active temperature control. The first one exploits two additional fibers, positioned beside the optical stretching ones, to introduce additional laser heating, see Figure 22a; the same wavelength of 1064 nm is coupled into all the fibers. Since the two heating fibers are very close to the active area of cell trapping and stretching, the temperature change builds up within one second; hence, a simple temperature control for optical stretching is obtained by simply changing the power injected into the two extra fibers. The second method obtains the same effect by coupling a second laser beam into one of the two stretching fibers, see Figure 22b. The wavelength of the heating laser is chosen as 1480 nm because the absorption of the medium at this wavelength is higher than at 1064 nm, leading to a strong heating at low laser power and, consequently, to a negligible optical force from the heating laser. By changing the heating laser power, the temperature increase can easily be tuned. Both methods act locally inside the microfluidic circuit, and both only allow the temperature to be increased. A different temperature control approach, acting on the whole microchip, is obtained by mounting it in a precisely regulated temperature environment, such as a water bath [39], an aluminum sample holder [76,77] or a thermal chamber. Differently from the previously described methods, the thermal response in these cases is much slower, and they are therefore intended for long-timescale temperature control. These setups, combined with an optical stretching device, have been exploited to evaluate the temperature effect on the mechanical properties of different cell types, including human breast epithelial cells (MCF7, MCF10A), myeloid precursor cells (HL60) and human melanoma cells (A375MC2).
Figure 23 reports an example obtained through the method based on the extra heating fibers, see Figure 22a. The optical compliance curves are obtained from optical stretching measurements performed both with different stretching laser powers and no additional heating, and with the same stretching power and different heating laser powers. It can be observed that an increase of the stretching power leads to a strong decrease of the cells' stiffness, reflected in their stronger deformation. By applying the additional heating radiation, the cells become softer at the same stretching laser power. A thermorheological methodology based on time-temperature superposition [76] has been exploited to explain these results, and to propose that the cell creep behavior observed in optical stretching experiments is mainly due to the temperature effect of the laser heating. Under long-term temperature treatment, and within a certain temperature threshold, cell stretching measurements still show similar results. Consequently, the temperature increase induced by laser heating during the cell optical stretching process can have a strong effect on cell mechanical properties, and should always be taken into account for a proper characterization.

Other Related Studies

In addition to the above-mentioned aspects and applications of the OS, new interesting studies have been carried out by combining the OS with other techniques so as to integrate new functionalities. One example exploits resonant acoustic waves to prefocus the flowing cells at the correct channel height, so that they all intercept the laser beams and are therefore suitable for laser trapping and stretching. This idea was first demonstrated by Khoury et al. [80] in a three-layer assembled optical stretcher (a thin piece of PDMS containing the microfluidic channel is sandwiched between two glass slides), with the acoustic wave driven by a piezo-ceramic attached beneath the chip. However, the plastic layer, having an acoustic impedance similar to that of water, allowed only a low-efficiency excitation of the acoustic wave, and also required a fine selection of the bottom and top glass thicknesses. A recent study by Nava et al. applied the same principle to an all-silica optical stretcher [37]. The layout of the monolithic OS used is shown in Figure 24a. Thanks to the use of a hard material, with an acoustic impedance very different from that of water, the efficiency of the acoustic wave excitation was strongly increased with respect to the past results [80], and the use of a square-section channel allowed a 2D prefocusing effect (i.e., acting in both the vertical and horizontal directions), as shown in Figure 24b,c. The use of this chip greatly reduced the problems related to the height of the flowing cells, thus doubling the OS measurement throughput. It was also demonstrated that the applied acoustic wave had no discernible effect on the cellular optical deformability of either red blood cells or mouse fibroblast cells. Furthermore, Yang et al. [81] employed this chip in another recent study measuring both the optical deformability and the acoustic compressibility of single cells, by optical stretching and acoustophoresis experiments, respectively. They found that the cancerous cell line MDA-MB-231 has both a higher acoustic compressibility and a higher optical deformability than its less aggressive counterpart MCF7, and, moreover, that optical deformability and acoustic compressibility are not correlated parameters.
This result highlights the possibility of adding functionalities to an optical stretcher so as to analyze cells from different perspectives. Another example of the integration of a new functionality in the optical stretcher is the optical cell rotator. Differently from the optical trapping realized with two opposing single-mode fibers in an assembled optical stretcher, Kreysing et al. [82] replaced one of the two fibers with a dual-mode fiber, spliced with a defined offset with respect to the original single-mode fiber. By rotating this dual-mode fiber, the laser beam profile is changed accordingly, leading to an active rotation of the trapped cell. With this modification, they demonstrated that an individual cell can be stably held in a well-defined orientation, or rotated perpendicularly to the laser axis. Recently, Kreysing et al. [83] further simplified this optical cell rotator by exploiting a few-mode fiber and operating it dynamically beyond the single-mode regime, realizing precise optical field changes without physically rotating the fiber itself (see Figure 25). This ability to precisely orient cells in three dimensions could enable a range of applications in biological and medical research, such as the tomographic reconstruction of cell samples by imaging them from different angles, the determination of the 3D refractive index distribution of live cells, or wide-field fluorescence imaging from multiple angles with subsequent image fusion.

Final Discussion

Since the seminal works on optical forces by Arthur Ashkin, the possibility of carefully applying controlled forces to biological elements has attracted more and more attention. The development of the optical stretcher configuration, largely investigated by Guck, Käs et al., created the basis for the optimization of a new, promising and flexible tool that can be used to trap, analyze and sort single cells. In this review we highlighted the most commonly exploited fabrication technologies, described the physical effects at the basis of the optical stretcher working mechanism, and offered an overview of some of the most relevant applications. Currently, the main limitation of the optical stretcher is its throughput, of roughly one cell per 10 s. Even though the optical stretcher is already integrated within a microfluidic circuit, and the optical deformation measurement can be performed automatically in real time, this throughput is still much lower than that of the purely hydrodynamic cell-stretching technique mentioned in the Introduction, which offers a significantly higher throughput of about 1000 cells per second. Future development of the optical stretcher should therefore focus on precise single-cell characterization and, more importantly, on the subsequent analysis of the same cell with different additional functionalities. Two examples have been highlighted in Section 7, where the optical cell rotator for 3D imaging and the use of acoustic waves, for prefocusing in continuous optical stretching measurements and for compressibility measurements, have been described. In addition, for optical stretcher fabrication, an all-polymer technique appears feasible, as demonstrated by a study from Khoury et al. [84] showing that polymer waveguides written by deep-UV lithography [85] can be integrated with microfluidic channels and are fully functional for optical manipulation.
We strongly believe that the development of microfluidic systems encompassing an optical-stretcher section will continue apace in the future, and will lead to the development of off-the-shelf devices for biologists and material-science researchers, especially thanks to the possibility of including optical stretching in microfluidic devices that allow for high-resolution imaging and 3D surface tomography and that integrate multiple actuator systems.
15,181.6
2016-05-01T00:00:00.000
[ "Engineering", "Materials Science" ]
An Antioxidant Phytotherapy to Rescue Neuronal Oxidative Stress Oxidative stress is involved in the pathogenesis of ischemic neuronal injury. A Chinese herbal formula composed of Poria cocos (Chinese name: Fu Ling), Atractylodes macrocephala (Chinese name: Bai Zhu) and Angelica sinensis (Chinese names: Danggui, Dong quai, Donggui; Korean name: Danggwi) (FBD) has been proved to be beneficial in the treatment of cerebral ischemia/reperfusion (I/R). This study was carried out to evaluate the protective effect of FBD against neuronal oxidative stress in vivo and in vitro. Rat I/R was established by middle cerebral artery occlusion (MCAO) for 1 h, followed by 24 h reperfusion. MCAO led to significant depletion of superoxide dismutase and glutathione and a rise in lipid peroxidation (LPO) and nitric oxide in brain. The neurological deficit and brain infarction were also significantly elevated by MCAO as compared with the sham-operated group. All of the brain oxidative stress and damage were significantly attenuated by 7 days of pretreatment with the aqueous extract of FBD (250 mg kg−1, p.o.). Moreover, cerebrospinal fluid sampled from FBD-pretreated rats protected PC12 cells against oxidative insult induced by 0.2 mM hydrogen peroxide, in a concentration- and time-dependent manner (IC50 10.6%, ET50 1.2 h). However, the aqueous extract of FBD only slightly scavenged the superoxide anion radical generated in a xanthine–xanthine oxidase system (IC50 2.4 mg ml−1) and the hydroxyl radical generated in a Fenton reaction system (IC50 3.6 mg ml−1). In conclusion, FBD was a distinct antioxidant phytotherapy for rescuing neuronal oxidative stress, acting through blocking LPO and restoring the endogenous antioxidant system, but not through scavenging free radicals. Introduction Acute ischemic stroke is a leading cause of death in the majority of countries [1]. Evidence supports the involvement of oxidative stress in neuronal injury during brain ischemia/reperfusion (I/R) [2][3][4]. The lethal process is accompanied by elevated free radicals, including superoxide anion (O2•−), hydroxyl radical (•OH) and hydrogen peroxide (H2O2), as well as progressive depletion of the endogenous antioxidant system, comprising antioxidant enzymes, such as superoxide dismutase (SOD), glutathione peroxidase (GSH-Px) and catalase, and antioxidants, such as glutathione (GSH), Vitamin (Vit) C and Vit E (α-tocopherol) [5]. Pathological free radicals directly damage neuronal proteins, lipids and DNA; generate toxic lipid peroxides; and ultimately contribute to brain infarction and neurobehavioral symptoms. Although free radical scavengers, for example, edaravone [6] or extract of Ginkgo biloba (EGb761) [7], have been demonstrated to be antagonistic to brain I/R, the available anti-I/R agents are still far from sufficient [8]. The beneficial effects of these medicinal plants on cerebrovascular disorders have drawn increasing attention in recent research. Clinically, a great number of traditional herbal formulae comprising Fu Ling, Bai Zhu and Danggui (FBD) have been applied to treat ischemic stroke and vascular dementia (VD), mostly with good efficacy. Statistical analysis has shown that the three herbs are frequently used in such formulae, notably the anti-stroke/VD formulae Toki-Shakuyaku-San and Yi-Gan San [34,35]. To some extent, the clinical neuroprotection of the three herbs has been shown to be relevant to their antioxidant properties [10,36,37].
As traditional Chinese nourishing-tonifying drugs, crude extracts of Fu Ling, Bai Zhu and/or Danggui have the capacity to inhibit cellular lipid peroxidation (LPO) induced by free radicals, for example, H2O2 [21][22][23], as well as to preserve tissue GSH status and GSH-Px activity [11,24]. However, in vitro, their direct free radical scavenging activities are relatively weak, requiring high concentrations in various biochemical reaction systems, including the xanthine–xanthine oxidase (XO) system and the Fenton reaction system [12,13]. Therefore, it was hypothesized that FBD exerts its protective effects against I/R-induced neuronal oxidative stress largely via inhibiting LPO and maintaining the endogenous antioxidant system, instead of scavenging free radicals. The primary aim of this study was to evaluate the herbal formula against neuronal oxidative stress induced by middle cerebral artery occlusion (MCAO) in vivo and by H2O2 in vitro. In addition, we evaluated the scavenging activities of FBD against O2•− generated in a xanthine–XO system and •OH generated in a Fenton reaction system to assess its antioxidant properties. Preparation of Aqueous Extract of FBD. The three herbal materials used in this work were purchased from the Nanjing herbal materials company (Nanjing, China) and authenticated by Prof. Boyang Yu, Department of Pharmacognosy, China Pharmaceutical University. Clinically, a single formula of FBD consists of 10 g P. cocos, 5 g A. macrocephala and 3 g A. sinensis. The aqueous extract of FBD was prepared as follows: the three components were macerated for 30 min, decocted for 30 min with 8 times (v/w) double-distilled H2O, and the filtrate obtained was concentrated and dried in vacuum at 60 °C into a brown powder, with a yield of 12.5% (w/w). Animals and Pretreatment. Male Sprague-Dawley rats weighing 250-350 g were randomized into four groups: rats in the FBD-pretreated group received FBD (250 mg kg−1, p.o.), while EGb761-pretreated rats were given EGb761 (24 mg kg−1, p.o.) as positive control. The sham-operated group and the vehicle-pretreated group were given 0.5% carboxymethylcellulose-saline vehicle p.o. Vehicle or drugs were administered once daily for 7 consecutive days. The animal handling procedures were in compliance with the China National Institutes of Health Guidelines for the Care and Use of Laboratory Animals. Middle Cerebral Artery Occlusion. One hour after the seventh administration, rats were subjected to 1 h right MCAO using the intraluminal filament technique [38]. Briefly, rats were anesthetized with chloral hydrate (400 mg kg−1, i.p.). The right common carotid artery was exposed at the level of the external and internal carotid artery (ECA and ICA) bifurcation. A 4-0 monofilament nylon suture was inserted into the ECA and advanced into the ICA for 17-20 mm until a slight resistance was felt, to block the origin of the middle cerebral artery. One hour after MCAO, the suture was slowly withdrawn. The sham-operated rats did not undergo MCAO, except for exposure of the ECA and ICA. Animals were then returned to their cages for 24 h and closely monitored, with body temperature kept at 37 ± 0.5 °C. Neurological and Histological Examination. The neurological deficits in rats were assessed after 24 h reperfusion.
Ten rats from each group were assigned a numerical score on a 5-point scale as described: no neurological deficit = 0; failure to fully extend right paw = 1; circling to right = 2; falling to right = 3; did not walk spontaneously and had depressed levels of consciousness = 4 [39]. Then, rats were killed and brain tissue was removed and sliced into 2.0 mm thick coronal sections. Brain slices were incubated in 2% TTC saline solution at 37 °C for 30 min, then fixed in 10% phosphate-buffered formalin for 45 min. Infarct areas in brain slices, outlined in white, were captured with a digital camera and measured by an image analysis system (Zeiss AxioVs 40, Oberkochen, Germany), and the infarct volume was calculated using the following equation: % infarct volume = (infarct volume/slice volume) × 100%. Neurochemical Assays. Twenty-four hours after reperfusion, rats were sacrificed and cerebral cortices were collected. A 10% (w/v) homogenate was prepared in ice-cold saline and the supernatant was obtained after centrifugation at 3000 r.p.m. for 15 min. Neurochemical assays were conducted in accordance with the specifications of the medical kits. When unsaturated fatty acids undergo LPO, MDA is formed. The thiobarbituric acid reaction was used to determine MDA (expressed as μmol g−1 protein) [40]. Nitrite in cortical supernatant was measured after reaction with Griess reagent (sulfanilamide 1%, naphthylethylene diamine 0.01%, H3PO4 5%) with sodium nitrite as a standard, by which NO production could be assessed as micromoles per gram of protein [41]. The assay for SOD was based on its ability to inhibit the oxidation of oxymine by the xanthine-XO system (expressed as U mg−1 protein) [42]. GSH (expressed as μmol g−1 protein) was measured through a reaction using dithiobisnitrobenzoic acid, as described by Ball [43]. Protein concentration was measured by the Lowry method with bovine serum albumin as standard. Oxidative Insult in PC12 Cells Induced by H2O2. Neuron-like pheochromocytoma (PC12) cells were provided by the Institute of Cell Biology (Shanghai, China). The cells were suspended in Dulbecco's Modified Eagle's Medium supplemented with 10% heat-inactivated newborn calf serum, benzylpenicillin (100 kU l−1) and streptomycin (100 mg l−1) and incubated at 37 °C in 5% CO2. PC12 cells were exposed to H2O2 (200 μM) for 1 h to induce oxidative insult, then treated with CSF-FBD (v/v), Vit E (10 μM) or blank CSF. Twenty hours later, the MTT assay was performed to observe the cell viability of PC12 cells [37]. Briefly, MTT solution (0.5 mg ml−1) was added to each culture well. After incubation for 4 h, the formazan crystals were dissolved by addition of 50 μl dimethyl sulfoxide and read at dual wavelengths, 570 nm/650 nm. 2.11. Statistical Analysis. SPSS 12.0 software and Origin 7.0 software were applied to analyze the experimental data, and results were expressed as mean ± SD. All data were evaluated with analysis of variance (ANOVA) followed by Dunnett's t-test for multiple comparisons; P < .05 indicates that the difference was statistically significant. Neuroprotective Effects of FBD In Vivo. Rats surviving more than 24 h awakened from anesthesia with a moderately severe left hemiparesis and circling movements. TTC staining indicated that an infarction zone existed in the right temporal lobe cortical and striatal tissues. The neurological score and infarct size in the vehicle-pretreated MCAO rats rose to 2.6 ± 0.7 and 19.7% ± 2.2%, respectively, indicating that I/R
resulted in neuronal injury. (Data are shown as mean ± SD, n = 10-12; significance was evaluated with one-way analysis of variance (ANOVA) followed by two-sided Dunnett's t-test: ## P < .01 versus the sham-operated group; * P < .05, ** P < .01 versus the vehicle-pretreated MCAO group.) In comparison to the vehicle-pretreated group, FBD (250 mg kg−1) significantly reduced the neurological score by 28.4% (P < .05) and the infarct size by 20.1% (P < .01). Its actions were to some extent stronger than those of 24 mg kg−1 EGb761 (by 27.6%, P < .05 and by 18.9%, P < .01, resp., Figure 1). The neurochemical results are presented in Table 2. After 24 h reperfusion, MDA and NO contents in the vehicle-pretreated group rose significantly (P < .01); in contrast, GSH content and SOD activity were reduced significantly (P < .01), which implied that oxidative stress had occurred. With respect to the vehicle-pretreated group, FBD (250 mg kg−1) significantly reduced MDA and NO production (P < .01) and restored SOD activity (P < .01) and GSH content (P < .05); likewise, EGb761 significantly suppressed oxidative stress to a similar extent. Antioxidant Activity of FBD Ex Vivo. Incubation with H2O2 for 3 h significantly reduced cell viability. However, when the cells were treated with rat CSF-FBD, the observed cytotoxicity was significantly attenuated. As illustrated in Figure 2, CSF-FBD markedly reduced H2O2 injury within 1.5 h in a time-dependent manner (ET50 1.2 h) and, within 20%, in a concentration-dependent manner (IC50 10.6%). Meanwhile, blank CSF had no obvious influence on the control PC12 cells, and Vit E (10 μM) protected PC12 cells by only 25.2%. Free Radical Scavenging Activity of FBD In Vitro. The direct free radical scavenging activity of FBD is shown in Figure 3. At concentrations of 0.05-5.0 mg ml−1, FBD exhibited concentration-dependent scavenging activities against O2•− generated in a xanthine-XO system and •OH generated in a Fenton reaction system, with IC50 values of 2.4 mg ml−1 and 3.6 mg ml−1, respectively, higher than those of Vit C (IC50 0.01 mg ml−1 and 0.25 mg ml−1, resp.). Discussion This study demonstrates the neuroprotective potential of FBD against MCAO-induced oxidative stress in rats, as well as H2O2-induced oxidative stress in neuron-like PC12 cells. Its neuroprotection appears to derive from reducing LPO and restoring the endogenous antioxidant system rather than from scavenging free radicals. Figure 1: Effects of aqueous extract of FBD on neurological score and brain infarction in rats subjected to MCAO. Each column represents the mean ± SD of 10-12 rats. Significance was evaluated with one-way ANOVA followed by two-sided Dunnett's t-test. * P < .05, ** P < .01 versus the vehicle-pretreated MCAO group. It is well documented that transient focal MCAO results in neurological and histological abnormality. Our results indicated that pretreatment with FBD offered protection against cortical and striatal neuronal damage induced by MCAO, as FBD reduced the neurological score and infarct size (Figure 1), in harmony with other studies [47,48]. Free radical involvement in the development of I/R-induced brain injury is well investigated [3,4][50]. We found that focal MCAO induced increases of LPO and NO (Table 2), in agreement with recent studies [51,52]. FBD inhibited NO production, but its scavenging activity against either •OH or O2•− was feeble compared to that of Vit C (Figure 3), supporting previous findings that P. cocos and A. sinensis are relatively weak natural free radical scavengers [13].
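As an aside, half-maximal parameters like the IC50 and ET50 quoted above are typically obtained by fitting a sigmoidal concentration-response curve to the measured protection values. The sketch below is a minimal illustration of such a fit with SciPy; the data points and initial guesses are invented for demonstration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, c50, h):
    """Four-parameter logistic (Hill) curve: response vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (c50 / c) ** h)

# Illustrative data: CSF-FBD concentration (%, v/v) vs. protection of
# PC12 viability (%); the numbers are made up for demonstration only.
conc = np.array([2.5, 5.0, 10.0, 15.0, 20.0])
prot = np.array([12.0, 28.0, 49.0, 68.0, 80.0])

p0 = [0.0, 100.0, 10.0, 1.0]  # initial guesses: bottom, top, C50, Hill slope
popt, _ = curve_fit(hill, conc, prot, p0=p0, maxfev=10000)
print(f"Fitted half-maximal concentration: {popt[2]:.1f} %")
```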
The overproduction of free radicals can be detoxified by endogenous antioxidants, causing their cellular stores to become depleted [52,53]. (Figure 2: Effect of rat CSF-FBD on PC12 cells injured by hydrogen peroxide. All data are shown as the mean ± SD, n = 6. Significance was evaluated with one-way ANOVA followed by two-sided Dunnett's t-test. * P < .05, ** P < .01, *** P < .001 versus blank CSF.) GSH, the most prevalent and important intracellular non-protein thiol, has a crucial role as a free radical scavenger. Here, GSH content and SOD activity were significantly reduced (Table 2). Similar to EGb761, FBD significantly prevented the decline in SOD activity and GSH content caused by MCAO. In addition to restoring the endogenous antioxidant system, anti-LPO activity was also implicated in the antioxidant properties of FBD. In 1996, Taylor et al. observed the inhibition of T cells by human CSF, and in 2000, Nakagawa et al. found that human CSF altered intracellular calcium regulation in endothelial cells [54,55]. Since CSF is the natural vehicle for CNS agents, both reports prompted us to design a novel experimental method to evaluate the neuroeffectiveness of FBD ex vivo. PC12 cells injured by H2O2 are a typical model used to evaluate the anti-LPO activity of a drug against neuronal oxidative stress [56,57]. In this work, CSF-FBD attenuated the oxidative insult to PC12 cells in both a time- and a concentration-dependent manner (Figure 2), in accordance with the in vivo finding that the MDA level in MCAO-subjected rats was depressed by FBD extract. The exact mechanism by which FBD abates oxidative stress is not yet clear, but it is strongly believed that recently identified active compounds may be responsible. Triterpenes from Fu Ling inhibited FeCl2-ascorbic acid-induced LPO and lysis of red blood cells [14]. Atractylon from Bai Zhu inhibited LPO by CCl4 in liver lesions, and its acetylene compound (6E,12E)-tetradecadiene-8,10-diyne-1,3-diol diacetate suppressed gastric lesions induced by I/R via inhibition of XO [17,18]. Z-ligustilide from Danggui protected against H2O2-induced cytotoxicity in PC12 cells and against forebrain I/R by enhancing antioxidant defense [25,26]. Coniferyl ferulate is the main antioxidant from the essential oil of Danggui [27], and ferulic acid can reduce neuronal damage from exposure to iron, hydroxyl and peroxyl radicals [28,29]. In addition, Danggui polysaccharides protected macrophages against tert-butylhydroperoxide-induced oxidative injury [30,31]. In conclusion, our present findings suggest that FBD may exert protection against neuronal oxidative stress induced by either MCAO in vivo or H2O2 in vitro. It is a distinct botanical antioxidant agent that reduces LPO and restores the endogenous antioxidant system, without the activity of a free radical scavenger. This research expands and elaborates the biological model underlying one complementary and alternative medicine treatment for neuronal oxidative stress.
3,675
2011-02-14T00:00:00.000
[ "Biology" ]
Pure and Nanocomposite Thin Films Based on TiO2 Prepared by Sol-Gel Process: Characterization and Applications Titanium dioxide (TiO2) thin films have innumerable applications, and the preparation of nanocomposites based on TiO2 favors the coupling of different structures that can lead to additional or enhanced properties. The aim of this chapter is to show the preparation and characterization of TiO2 thin films and some nanocomposites based on anatase-TiO2, prepared by the sol-gel process using the dip-coating technique. TiO2 thin films were prepared by the sol-gel process onto borosilicate glass, steel, magnet, and silicon substrates from alcoholic starting solutions containing titanium isopropoxide, isopropyl alcohol, and acids to control the gelation rate. The doped thin films, such as SiO2/TiO2, Ag/TiO2, and Nb/TiO2, were prepared by adding the dopants in the form of salts or alkoxides to the starting solution. The morphological, structural, and textural characterization of the films was made using X-ray diffraction (XRD), high-resolution transmission electron microscopy with an energy-dispersive spectrometer (EDS) detector, atomic force microscopy/nanoindentation, and UV-Vis spectroscopy. Photoelectrical, mechanical, biological, optical, and surface properties were evaluated. Introduction Titanium dioxide (TiO2) is a multifunctional, semiconducting and polymorphic material, which is commercialized in the rutile or anatase phases, both with tetragonal crystal structures. TiO2 has been used in industry since 1918 as a pigment in paints, paper, plastics, drugs, cosmetics, etc. In recent years, with the advent of nanotechnology, powders and films of titanium dioxide have been widely studied due to the new properties obtained by decreasing the particle size. The wide range of applications is due to its electronic and structural properties, such as high transmittance in the visible, high refractive index (n = 2.6), high photocatalytic activity, and chemical stability. These properties make TiO2 an excellent material for use in photocatalysis, antimicrobial surfaces, self-cleaning and hydrophobic surfaces, photovoltaic cells, gas sensors, photochromic devices, etc. [1]. Titanium is the second transition metal in the periodic table and has an [Ar]3d2 4s2 electron configuration. It was discovered in 1791 by the mineralogist William Gregor, in the region of Cornwall, United Kingdom, in the mineral ilmenite (FeTiO3). In 1795, it was isolated by the German chemist Heinrich Klaproth in the form of the TiO2 rutile phase. Titanium dioxide can be found in three different crystalline phases: anatase, brookite, and rutile. By thermal treatment, it is possible to convert the anatase and brookite phases into rutile, which is thermodynamically stable at high temperatures. The anatase phase is more reactive, mainly at nanometric dimensions, and is frequently used in photocatalytic applications. As a semiconductor, TiO2 can be studied in terms of the energy band theory, whose bandgap energy (3.2-3.6 eV) can be supplied by photons with energy in the near-ultraviolet range and whose separation between valence and conduction bands is intrinsically linked with its optical and electronic properties. These bandgap values depend on the particle size, phase, and dopant used, making the modulation of these values possible. In the case of thin films, which traditionally are formed by TiO2 nanoparticles, the thickness also contributes to the modulation of the bandgap values.
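As a quick numerical check of the statement that a 3.2-3.6 eV bandgap is matched by near-ultraviolet photons, the standard photon-energy relation E(eV) ≈ 1239.84/λ(nm) can be evaluated directly; the short sketch below is illustrative and not part of the original chapter.

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def edge_wavelength_nm(bandgap_ev: float) -> float:
    """Wavelength of a photon whose energy equals the bandgap."""
    return HC_EV_NM / bandgap_ev

for eg in (3.2, 3.4, 3.6):
    print(f"Eg = {eg:.1f} eV -> absorption edge ~ {edge_wavelength_nm(eg):.0f} nm")
# 3.2 eV -> ~387 nm and 3.6 eV -> ~344 nm: both in the near-UV,
# consistent with the near-ultraviolet excitation described in the text
```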
Several studies have been made aiming at the best quality of the films and at a decrease in the bandgap energy through the introduction of dopants into the TiO2 structure, to improve the photocatalytic property in the visible region of the light [1,2]. The introduction of dopants such as SiO2, Ag, and Nb, among others, into the TiO2 thin film structure changes its properties, expanding the range of possible applications. The methods of preparation also significantly influence its morphology, structure, and texture, modifying its properties. Several methods can be used to obtain thin films, such as chemical vapor deposition, sputtering, spray pyrolysis, and the sol-gel process. The sol-gel process [3] allows the preparation of thin films with high purity, thermal and mechanical resistance, and chemical durability, and allows the control of morphology, composition, thickness, and porosity. Thin film depositions using the sol-gel process can be realized by dip-coating, spin-coating, or spray-coating techniques. These techniques are economically feasible and can be applied to substrates with large surfaces and different shapes. Sol-gel process The sol-gel process [3] that leads to the formation of TiO2 films is based on mechanisms of hydrolysis and polycondensation of titanium alkoxides mixed with alcohol and catalytic agents. There are various kinds of Ti alkoxides, such as titanium isopropoxide (Ti(OiPr)4) and titanium ethoxide (Ti4(OEt)16), among others, which need to be used preferentially with their corresponding alcohol. The precursor solution, also called sol, is a colloidal suspension of Ti surrounded by ligands, with physical-chemical properties adequate for the formation of a film. After a deposition, which can be by dip-coating, spin-coating, or spray-coating processes, the film is formed by a wet gel that becomes a dry gel after the drying process. The hydrolysis of the alkoxide group to form Ti─OH occurs due to nucleophilic substitution of O─R groups (alkyl group) by hydroxyl groups (─OH), and the condensation of the Ti─OH groups produces Ti─O─Ti and by-products (H2O and ROH), leading to the formation of the gel, according to the equations below: ≡Ti─OR + H2O → ≡Ti─OH + ROH (hydrolysis); ≡Ti─OH + HO─Ti≡ → ≡Ti─O─Ti≡ + H2O and ≡Ti─OH + RO─Ti≡ → ≡Ti─O─Ti≡ + ROH (condensation). This mechanism is relatively complex because the reactions occur simultaneously during the process of deposition. In this proposed mechanism, the alkoxide precursor passes through the sequence oligomer, polymer, and colloid, and finishes as an amorphous porous solid structure. Thermal treatments are used for the preparation of nanocrystalline thin films. With the use of doping salts in the precursor solutions, the mechanism becomes more complex due to the introduction of other metals into the gel network. The dip-coating technique [4] consists of dipping a substrate in the sol and withdrawing it at constant speed (Figure 1), resulting in an M─O─M oxide network that forms a wet gel film. The network structure, the morphology, and the thickness of the film depend on the contributions of the hydrolysis and condensation reactions, which must occur at approximately the same rate as the substrate withdrawal; otherwise, the solution may run down the substrate. These properties may be controlled by varying the experimental conditions: type of organic binder, the molecular structure of the precursor, water/alkoxide ratio, type of catalyst and solvent, withdrawal speed, and solution viscosity.
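The chapter does not give an explicit thickness law, but a commonly used first-order model for the dependence of the entrained wet-film thickness on withdrawal speed and viscosity in dip-coating is the Landau-Levich relation, h = 0.94(ηU)^(2/3)/(γ^(1/6)·(ρg)^(1/2)), valid at low capillary number. The sketch below evaluates it for sol parameters assumed purely for illustration; it is not the chapter's own analysis.

```python
import math

def landau_levich_thickness(eta, U, gamma, rho, g=9.81):
    """Wet-film thickness (m) from the Landau-Levich relation,
    valid for low capillary number Ca = eta*U/gamma << 1.
    eta: viscosity (Pa*s), U: withdrawal speed (m/s),
    gamma: surface tension (N/m), rho: density (kg/m^3)."""
    return 0.94 * (eta * U) ** (2.0 / 3.0) / (gamma ** (1.0 / 6.0) * math.sqrt(rho * g))

# Illustrative sol parameters (assumptions, not values from the chapter):
eta = 3.0e-3    # ~3 cP, within the 2-5 cP viscosity window quoted in the text
gamma = 0.022   # N/m, typical for an alcohol-rich sol
rho = 820.0     # kg/m^3
for U_mm_s in (0.2, 1.0, 1.5):
    h = landau_levich_thickness(eta, U_mm_s * 1e-3, gamma, rho)
    print(f"U = {U_mm_s:.1f} mm/s -> wet thickness ~ {h * 1e9:.0f} nm")
# These are wet-film thicknesses; solvent evaporation and densification
# during drying/calcination reduce them considerably, toward the tens to
# hundreds of nanometers per coating reported for the dried films.
```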
After the deposition, the gel film is formed by a solid structure impregnated with the solvent, and a drying process can be used to convert the wet gel into a dry porous film. Denser films can be tailored by different thermal treatment temperatures, leading to films with different specific surface areas and porosities. The advantage of the dip-coating process is the ease of deposition on substrates of any size and shape, facilitating the industrial process. Experimental TiO2 thin films were prepared by the sol-gel process [2,5] using titanium isopropoxide (Aldrich, 98%) as the titania precursor, mixed with isopropyl alcohol and hydrochloric acid in stoichiometric amounts. The precursor solution was kept under agitation at room temperature for 1 h and rested until the viscosity reached the optimal range, between 2 and 5 cP. The films were prepared using solutions with 2 < pH < 4 and atmospheric relative humidity <40%, since they are opaque and not adherent at other pH and relative humidity values. The films were deposited onto clean substrates (borosilicate glass, steel, silicon, and magnets) at room conditions (25°C, relative air humidity lower than 30%), using dip-coating equipment with a withdrawal speed between 0.2 and 1.5 mm/s. The substrates were washed with a standard cleaning method before dipping. After each dip-coating step, the wet films were dried in air for 30 min and thermally treated at temperatures between 100 and 500°C for a range of times (between 10 and 60 min) to convert them into porous or densified oxide films. Depending on the thermal treatment temperature, the films can be amorphous or nanocrystalline. Some samples were exposed to UV-C light (lamp Girardi RSE20B, 254 nm-15 W) to crystallize them without increasing the temperature. Crystalline structures were investigated by X-ray diffraction (incidence angle of 5°) using a Rigaku diffractometer (Geigerflex model 3034). The samples were analyzed by atomic force microscopy (AFM) in an Asylum Research instrument, model MFP-3D-SA, to observe the topography and possible coating defects, such as cracks and peeling. Morphological characterization was evaluated by transmission electron microscopy (FEI TECNAI G2 20 at an accelerating voltage of 200 kV). Electron diffraction was also used to determine the structure of the crystalline phases. The films were pulled from the glass substrates and mounted onto 200 mesh copper grids coated with holey carbon films for examination. The morphology and composition were evaluated by a scanning electron microscope (SEM), FEI Quanta 200 FEG, with an energy-dispersive spectrometer (EDS). The transparency and thickness of the films deposited on glasses were verified by optical transmission spectra measured with an ultraviolet and visible spectrometer (U3010, Hitachi). Results The TiO2 films obtained by the sol-gel process using the dip-coating technique are transparent, homogeneous, adherent, durable, and free of micro-cracks. Figure 2a shows thin films removed from a glass substrate. The thickness of the films deposited on glass and dried in air can range from 40 to 800 nm per coating, depending on the withdrawal speed and viscosity. After heating, the film thickness decreases due to the densification process, reaching values between 20 and 300 nm per coating. When the number of coatings is increased, the thickness can reach 800 nm after calcination without cracks.
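Where interference fringes are visible in such transmission spectra, the film thickness can be estimated from two adjacent maxima. The sketch below uses the standard two-fringe relation under the simplifying assumption of a constant refractive index; the fringe positions and index are invented for illustration, not taken from the chapter's measurements.

```python
def film_thickness_nm(lam1_nm: float, lam2_nm: float, n: float) -> float:
    """Thickness from two adjacent interference maxima (lam1 > lam2) in a
    transmission spectrum, assuming a wavelength-independent index n:
    2*n*d = m*lam1 = (m+1)*lam2  ->  d = lam1*lam2 / (2*n*(lam1 - lam2))."""
    return lam1_nm * lam2_nm / (2.0 * n * (lam1_nm - lam2_nm))

# Illustrative fringe positions and index (assumed values):
print(f"d ~ {film_thickness_nm(650.0, 520.0, 2.1):.0f} nm")
```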
After drying, the films are porous when treated at low temperatures, and the density increases as a function of heating temperature and time. The porosity of the films leads to a variation in the refractive index, which can change from 1.9 to 2.3 (λ = 550 nm) for porosities between 20 and 5%, respectively. Figure 2b shows an example of the variation of thickness and refractive index as a function of the thermal treatment temperature of a TiO2 film. When the TiO2 films are deposited on substrates that cannot be thermally treated, such as polymers and cotton, the densification and crystallization can be achieved by UV light treatment. Figure 3 shows images of TiO2 films heated at 100 and 400°C for 10 min. The film formed after drying at room temperature is amorphous and contains organic contaminants in the network. With increasing thermal treatment temperature, the film structure changes to the anatase phase around ~300°C and to the rutile phase above ~600°C. According to the literature, the phase transition temperatures of TiO2 can shift by some degrees depending on the type and time of drying, the dopant used, and the particle size, among other factors. Figure 4 shows typical diffractograms of TiO2 films deposited on a glass substrate at two temperatures, generating an amorphous material at 100°C and a nanocrystalline material at 400°C. Figure 5 shows SEM and TEM images of the film and the respective electron diffraction pattern that confirms its anatase phase. TiO2 thin films are used in the fabrication of optical devices (linear and nonlinear) due to their transparency throughout the visible spectrum, high linear and nonlinear refractive indices that change as a function of the wavelength, and dielectric properties. Their nonlinearity can make possible operations such as logic, all-optical switching, and wavelength conversion. Their high linear index of refraction can improve optical confinement in waveguides. The optical and electric properties of the thin films made by the sol-gel process can be modulated according to the desired application. Figure 6 shows transmittance curves of TiO2 thin films deposited on glass substrates as a function of the number of layers. Each layer measures approximately 60 nm. From these spectra it is possible to calculate the bandgap of the films using, for example, the Tauc method (a minimal sketch is given at the end of this section). The value measured in this case was 3.4 eV, meaning that the photocatalytic activity occurs at wavelengths in the UV region. Several studies aim to reduce the bandgap of the TiO2 anatase phase to the visible region to make it a competitive energy source with applications in photocatalysis, solar cells, and artificial photosynthesis. TiO2 films are also used in the preparation of hydrophobic and self-cleaning surfaces on several substrates. Figure 7 shows water drops on the film surface and on the glass substrate surface. The contact angle can be changed by varying the film porosity and the number of layers, for example. TiO2 self-cleaning surfaces have the ability to remove greasy dirt and bacteria from their surfaces due to their photocatalytic property, which promotes the breakdown of fat molecules or destroys the membranes of bacteria. The self-cleaning property is frequently connected to hydrophobic surfaces, because dust can be removed by the rolling of water droplets on the surface.
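A minimal sketch of the Tauc extrapolation mentioned above follows. It assumes an indirect allowed transition (exponent 1/2, often used for anatase), takes α = −ln(T)/d with reflection losses neglected, and uses synthetic transmittance data; it illustrates the method only and is not the chapter's actual analysis.

```python
import numpy as np

def tauc_bandgap(wavelength_nm, transmittance, thickness_nm, r=0.5):
    """Estimate the optical bandgap (eV) via a Tauc plot.
    r = 0.5 for indirect allowed transitions (commonly used for anatase),
    r = 2 for direct allowed transitions.
    Reflection losses are neglected: alpha = -ln(T) / d."""
    d_cm = thickness_nm * 1e-7
    e_ev = 1239.84 / np.asarray(wavelength_nm)
    alpha = -np.log(np.clip(transmittance, 1e-6, 1.0)) / d_cm
    y = (alpha * e_ev) ** r
    # fit only the steep, quasi-linear part of the absorption edge
    edge = y > 0.2 * y.max()
    slope, intercept = np.polyfit(e_ev[edge], y[edge], 1)
    return -intercept / slope  # x-intercept of the linear extrapolation

# Synthetic absorption edge near 3.4 eV for demonstration (not measured data)
wl = np.linspace(300, 450, 200)
e = 1239.84 / wl
alpha_true = 1e5 * np.clip(e - 3.4, 0, None) ** 2  # indirect-gap-like edge, 1/cm
T = np.exp(-alpha_true * 300e-7)                    # 300 nm thick film
print(f"Tauc estimate: {tauc_bandgap(wl, T, 300.0):.2f} eV")
```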
SiO2/TiO2 thin films When an Si alkoxide is mixed with a Ti alkoxide to prepare precursors of TiO2/SiO2 thin films by the sol-gel process, the nanocomposites produced can combine or enhance the properties of the well-known pure oxides TiO2 and SiO2 [6]. These nanomaterials can offer enhanced photocatalytic activities, persistent superhydrophilicity, modulated refractive index, enhanced resistance to corrosion, and superior mechanical properties, such as larger mechanical resistance and hardness. The deposition of TiO2/SiO2 thin films on different substrates such as glasses, metals, ceramics, and polymers enables the application of these films for many purposes, such as self-cleaning surfaces, antireflection surfaces, anticorrosion protection, wear resistance protection, fungicidal and bactericidal surfaces, water and air treatment devices, planar waveguides, nonlinear optical devices, etc. The most important fact is that two or more of these applications can be combined in TiO2/SiO2 multifunctional surfaces [7,8]. In this work, TiO2/SiO2 nanocomposite thin films were prepared using titanium isopropoxide (Aldrich, purity >98%), isopropyl alcohol and hydrochloric acid for the TiO2 precursor solution, and tetraethyl orthosilicate (Aldrich, purity >98%) with isopropyl alcohol for the SiO2 precursor. Solutions with different xTiO2/(100-x)SiO2 molar ratios (x = 0, 20, 40, 60, 80, and 100%) were prepared and stirred for 1 h. The final viscosity of the solutions was maintained at approximately 2.2 cP. The films were deposited on properly cleaned glass substrates with a constant withdrawal speed of 1.0 mm/s at 25°C and a relative air humidity of about 30%. The drying process occurred at 80°C in air for 10 min. This stage (deposition and drying) was repeated five times for thickness control. Finally, the samples were thermally treated at 500°C for 1 h. The TiO2 thin films were formed by the anatase phase, and the SiO2 thin films were amorphous, according to the XRD patterns and Raman spectroscopy results. The TiO2/SiO2 thin films are formed by the anatase phase dispersed in a vitreous matrix. The anatase phase is fundamental for the desired applications due to its optical and photocatalytic properties. The microstructure, morphology, and texture of the xTiO2/(100-x)SiO2 thin films change substantially due to the mixture of the titanium and silicon oxides, as seen in the AFM images of Figure 8. With the addition of SiO2, the titania nanoparticles remain dispersed in the vitreous matrix, and because of that, the pure TiO2 and SiO2 films have higher root-mean-square (RMS) roughness (2.2 and 6.0 nm, respectively) than the TiO2/SiO2 films (between 0.2 and 1.2 nm). The surface smoothing after the mixture of TiO2 and SiO2 resulted in an enhanced hardness, from 4.5 GPa for both pure films to approximately 7.4 GPa for all nanocomposite thin films. These properties are essential for outdoor applications, special windows, and the glass of cars and other vehicles, among others, since the film surfaces can be subjected to intense mechanical wear by airborne particles. Moreover, TiO2/SiO2 nanocomposite thin films present a persistent superhydrophilicity, which is required for application on self-cleaning surfaces and in water/air treatment, promoting a better washing away of the contaminants on the surface, which can be obtained with rain precipitation. These nanocomposites increase the adsorption of pollutants by the surface.
The optical properties of the xTiO2/(100-x)SiO2 films are modulated by varying the Ti/Si ratio, as seen in Figure 9. The possibility of modulating the transmittance and refractive index (n) of the xTiO2/(100-x)SiO2 thin films is essential in applications such as antireflection surfaces, filters, and planar waveguides, since this wide variation of n (from 1.45 to 2.18 in visible light) permits the construction of different structural models of devices. The variation of the refractive index from 2.0 to 2.8 as a function of the incident light wavelength (Figure 9b) is also very important for the construction of nonlinear optical devices [9]. Ag/TiO2 thin films TiO2 exhibits a high bandgap energy (3.2-3.8 eV), which corresponds to UV irradiation with a wavelength smaller than 388 nm. To overcome this limitation, several studies have been performed on the modification of TiO2 with metal and nonmetal species, aiming to extend the light absorption to the visible range and simultaneously increase the recombination time of the electron-hole pairs formed. In particular, nanocomposite thin films of silver and titania have been of considerable interest, since silver nanoparticles can act as electron traps, contributing to electron-hole separation and creating a local electric field capable of facilitating electron excitation, and consequently improving their photocatalytic properties. The improvement in the photocatalytic properties leads to surfaces with better bactericidal, hydrophobic, and self-cleaning characteristics [10]. Ag/TiO2 coatings were prepared from an alcoholic solution containing titanium isopropoxide and silver nitrate dissolved in isopropyl alcohol in several atomic ratios. Acid conditions (pH = 4) were reached after acetic acid addition. This precursor solution was stirred at room temperature for 1 h and submitted to UV-C irradiation (254 nm) treatment in air for 100 min. This procedure has been used to produce metallic Ag from Ag+ ions. The films were deposited onto clean substrates such as borosilicate, silicon, 316 L stainless steel, and magnets (NdFeB) with a withdrawal speed of 8 mm s−1. After deposition, the coatings with one to five layers were dried in air for 20 min and were thermally treated for 1 h between 100 and 400°C [5,11]. Figure 10 shows the characteristic diffractogram of Ag/TiO2 thin films with five layers deposited on glass and heated at 400°C. According to the XRD patterns, the coatings heated at 400°C show indexed peaks characteristic of crystalline metallic Ag and the anatase phase (PDF #1-562). The diffractogram of the film heated at 100°C was characteristic of a noncrystalline material, as expected. The substrates of 316 L stainless steel and magnets showed similar XRD patterns. SEM images of Ag/TiO2 heated at 400°C deposited on different substrates are shown in Figure 11. The structure of the substrates used has induced the formation of nano- and microstructures of metallic silver with different sizes and morphologies supported on the TiO2 thin film surfaces. This formation occurs due to the thermal treatment, which induces the diffusion of the metal nanoparticles to the film surface. On the borosilicate substrate (Figure 11a), the formation of spherical Ag nanoparticles with a bimodal particle size distribution is observed. When substrates of 316 L stainless steel and magnets (NdFeB) were used, Ag dendrite micro- and nanostructures were formed (Figure 11b and c).
A trimodal size distribution is observed for the particles present on the surface of the Ag/TiO2 film deposited on silicon (Figure 11d). Particularly in this film, the Ag particles show dimensions of 5-150 nm. Energy-dispersive spectra (EDS), shown in Figure 12, have confirmed the elemental composition of the Ag/TiO2 films treated at 400°C deposited on 316 L stainless steel. In this film, circular micrometric and submicrometric structures are also observed besides the dendrites mentioned above. Bright regions in the micrograph are constituted only of Ag, while the other regions are formed by the TiO2 matrix in the anatase phase, according to the XRD results. The analyses for the other substrates were similar. Figure 13 shows AFM images of Ag/TiO2 thin films with one layer deposited on a 316 L stainless steel substrate. The surface roughness of the 316 L stainless steel, whose texture is shown in Figure 13a and b, is ~40 nm, a much higher value compared to the roughness of the borosilicate substrate, which is about 0.20 nm. It is observed that the Ag/TiO2 films deposited on the steel substrates reduce their roughness as a function of the number of layers deposited. With four layers, the roughness value decreases to 7 nm. In addition, the Ag/TiO2 films are formed by silver nanoparticles dispersed on the surface of the TiO2 matrix with sizes between 20 and 50 nm. The introduction of silver into the TiO2 structure changes its optical properties. Other uses of Ag/TiO2 thin films are in hydrophilic/hydrophobic surfaces and in bactericidal and fungicidal devices [5], since the silver increases the TiO2 efficiency. Nb/TiO2 thin films Traditionally, niobium is used mainly in the production of metallic alloys for several industrial applications [12]. However, the use of niobium to produce ceramic materials has been increasing in the last few years, with several applications in catalysis, supercapacitors, and battery components, among others. The incorporation of niobium into other material structures, causing substitutional defects, has been studied as a way to improve several material properties, for example in TiO2. Examples of applications of Nb-doped TiO2 are its use as a photocatalyst, in dye-sensitized solar cells, in gas sensors, for magnetic properties, and as a transparent conductive oxide (TCO) for several electronic devices. Several methods are being used to synthesize and deposit Nb-doped TiO2 thin films on different types of substrates; however, the most used deposition methods are chemical vapor deposition (CVD), sputtering, and the sol-gel process. For the sol-gel synthesis of Nb-doped TiO2, the literature reports the use of mainly two niobium precursors, niobium ethoxide [Nb(OCH2CH3)5] and niobium pentachloride (NbCl5), which are very expensive [13]. In this work, Nb/TiO2 coatings were prepared from an alcoholic solution containing titanium isopropoxide and ammonium-(bisaquo oxobisoxalato) niobate-trihydrate (produced by CBMM, Brazil) dissolved in isopropyl alcohol. Acid conditions (pH = 4) were reached after acetic acid addition. The precursor solution was stirred at room temperature for 1 hour and deposited by the dip-coating process on clean glass substrates with a withdrawal speed between 0.8 and 3.7 mm s−1.
After deposition, the coatings with one to five layers were dried in air for 20 min and were thermally treated for 1 h between 100 and 500°C. The Nb-TiO2 thin films obtained are transparent, adherent, free of micro-cracks, and visually more homogeneous than the other deposited thin films. The niobium increases the mechanical resistance of the surface. A theoretical study using density functional theory (DFT) showed that the insertion of niobium into the titanium dioxide matrix, causing the substitution of Ti4+ cations by Nb5+ cations, changes its lattice parameters, cell volume, and bandgap [14]. The structures of the materials calcined at 500°C were found to be crystalline in the anatase phase (PDF #1-562). The thin films doped with 0.5, 1, and 3% molar ratio Nb:Ti showed a displacement of the (101) and (200) peaks to lower angles, evidencing the substitution of niobium inside the crystal structure, as shown in Figure 15. The increase of the niobium content in the thin film promoted a considerable variation in the lattice parameters, with the d101 spacing changing from 3.49 Å for pure TiO2 to 3.55 Å for 3% Nb/TiO2. The crystallite size decreased from 11 to 7 nm, which agreed with the DFT results previously reported. AFM 3D micrographs (Figure 16a and b) show that the TiO2 has a larger particle size and an RMS roughness of 2.2 ± 0.1 nm, while the 2% Nb/TiO2 film presents an RMS roughness of 0.6 ± 0.2 nm and smaller nanoparticles. All Nb/TiO2 thin films presented different profiles from the TiO2 thin films, with smaller nanoparticles and RMS roughness and, therefore, more homogeneity, adherence, and visual quality. The UV-Vis spectra in Figure 17 show that it is also possible to modulate the transmittance of the thin films as a function of the wavelength to obtain optical filters. All studied films showed similar bandgap values obtained by the Tauc method, between 3.4 and 3.6 eV. The insertion of niobium into the TiO2 structure led to a denser film with a higher refractive index and high mechanical resistance. Conclusion The sol-gel deposition parameters, such as the density of the precursor solution, concentration of oxides, viscosity, withdrawal velocity, number of dips, and drying temperature, influence the characteristics of the films, such as thickness, porosity, refractive index, particle size, particle shape, and oxidation degree. Overall, all the dopants used improved the quality and widened the range of application of the TiO2 films. The addition of SiO2 to the TiO2 films changes their mechanical, optical, and surface properties. The addition of Ag increases the photocatalytic activity, improving the fungicidal and bactericidal properties of the films; the capacity to switch between hydrophobicity and hydrophilicity was improved too. Doping with Nb improves the mechanical resistance of the films. All these properties can be applied in producing better photocatalytic surfaces to be used in solar energy production, self-cleaning surfaces, and optical and nonlinear optical devices.
5,964
2018-03-02T00:00:00.000
[ "Materials Science" ]
Genetic Diversity, Haplotype Relationships, and kdr Mutation of Malaria Anopheles Vectors in the Most Plasmodium knowlesi-Endemic Area of Thailand Plasmodium knowlesi, a malaria parasite that occurs naturally in long-tailed macaques, pig-tailed macaques, and banded leaf monkeys, is currently regarded as the fifth human malaria parasite. We aimed to investigate genetic diversity based on the cytochrome c oxidase subunit I (COI) gene, detect Plasmodium parasites, and screen for the voltage-gated sodium channel (VGSC)-mutation-mediated knockdown resistance (kdr) of Anopheles mosquitoes in Ranong province, which is the most P. knowlesi-endemic area in Thailand. One hundred and fourteen Anopheles females belonging to eight species, including An. baimaii (21.05%), An. minimus s.s. (20.17%), An. epiroticus (19.30%), An. jamesii (19.30%), An. maculatus s.s. (13.16%), An. barbirostris A3 (5.26%), An. sawadwongporni (0.88%), and An. aconitus (0.88%), were caught in three geographical regions of Ranong province. None of the Anopheles mosquitoes sampled in this study were infected with Plasmodium parasites. Based on the analysis of COI sequences, An. epiroticus had the highest level of nucleotide diversity (0.012), followed by An. minimus (0.011). In contrast, An. maculatus (0.002) had the lowest level of nucleotide diversity. The Fu's Fs and Tajima's D values of the Anopheles species in Ranong were all negative, except the Tajima's D value of An. minimus (0.077). Screening of VGSC sequences showed no kdr mutations in the Anopheles mosquitoes. Our results could be used to further select effective techniques for controlling Anopheles populations in Thailand's most P. knowlesi-endemic area. Introduction Four species of malaria parasite have long been known to cause human health issues: Plasmodium vivax, P. ovale, P. malariae, and P. falciparum [1]. Plasmodium knowlesi, a malaria parasite that occurs naturally in long-tailed macaques (Macaca fascicularis), pig-tailed macaques (Ma. nemestrina), and banded leaf monkeys (Presbytis melalophos), is now regarded as the fifth human malaria parasite [2,3]. The first naturally acquired human infection was documented in 1965, when a traveler acquired P. knowlesi after a brief stay in peninsular Malaysia [4]. Human P. knowlesi infections are prevalent in Southeast Asian countries such as Thailand, Cambodia, Myanmar, the Philippines, Singapore, Vietnam, Malaysia, and Indonesia [3]. In addition, this zoonotic malaria parasite has also been reported in other regions after being carried by travelers who visited Southeast Asian countries such as Malaysia [5,6] and Thailand [7,8]. Mosquitoes of the genus Anopheles are responsible for spreading malaria to humans. Although wild Plasmodium-infected Anopheles mosquitoes have been extensively surveyed to validate their role as malaria vectors, few studies have been able to confirm vectors of P. knowlesi due to a lack of appropriate molecular tools [9]. In 1961, Anopheles hackeri was identified as the natural vector of the simian malaria parasite P. knowlesi in peninsular Malaysia, based on sporozoites inoculated into a rhesus monkey [10]. However, this Anopheles species cannot transmit P. knowlesi to humans because it feeds mainly on monkeys and does not attack humans. The confirmation of P. knowlesi vectors using molecular techniques began in 2006 with Vythilingam et al. [11], who discovered that An. latens is a vector of P.
knowlesi in Sarawak, Malaysia, using a nested polymerase chain reaction (PCR) assay. In 2011, Marchand et al. [12] reported P. knowlesi infections in An. dirus sensu stricto (s.s.) in Southern Vietnam. Jiram et al. [13] confirmed that An. cracens is a vector of P. knowlesi in Kuala Lipis in peninsular Malaysia. In 2009, P. knowlesi infections were also found in An. sundaicus sensu lato (s.l.) on Katchal Island, India [14]. Recently, An. balabacensis and An. donaldi were identified as vectors of P. knowlesi in Lawas, Northern Sarawak, Malaysian Borneo, based on the detection of Plasmodium DNA in the salivary glands of wild Anopheles mosquitoes using a nested PCR assay [9]. As noted earlier, the Anopheles mosquitoes confirmed to be P. knowlesi vectors in the past have been found only in Malaysia, with single reports from Vietnam and India. Therefore, other countries should continue to investigate Anopheles mosquito vectors in order to control knowlesi malaria effectively. Since Anopheles vectors behave differently in different regions, a malaria vector in one region may not be a malaria vector in another [15]. Thailand is a malaria-epidemic country, especially in border areas, with malaria caused by P. vivax and P. falciparum [16]. Nevertheless, the number of P. vivax and P. falciparum malaria cases is decreasing annually. Thus, Thailand's Ministry of Public Health set a goal of eliminating malaria by 2024. However, the surge of knowlesi malaria cases may hinder Thailand's efforts to eliminate malaria. In 2004, the first case of a human P. knowlesi infection was reported in Thailand. The patient had a travel history that included a few weeks in the forest in Prachuap Khiri Khan province [17]. After the first case was reported, a few human infections continued to occur per year (<10 cases) until 31 cases were reported in 2018. The following year, P. knowlesi infection rates remained high (19 cases in 2019), and they began to increase in 2020 (22 cases) and 2021 (72 cases). In January-October 2022, 140 P. knowlesi-infected patients were reported [18]. Although the number of P. knowlesi-infected patients is currently on the rise, there is no information available on the natural vectors of P. knowlesi in Thailand, which makes controlling the disease difficult. Ranong is one of Thailand's southern provinces, near the Myanmar border, and is the most P. knowlesi-endemic area in Thailand, with 53 infected patients in 2022 (accounting for 96.36% of the province's total malaria cases during January-October 2022). In contrast, other malaria infections there are rare (one case of P. vivax and another of P. falciparum in 2022). A substantial portion of Ranong is forested area, a vital habitat for the primary malaria vectors in Thailand, including An. dirus, An. minimus, and An. maculatus [15,16]. Meanwhile, a portion of Ranong is coastal area, which is the habitat of An. epiroticus, a secondary malaria vector in Thailand [19]. For malaria control to be successful, comprehensive knowledge of Anopheles vectors is necessary [15]. However, in-depth information on Anopheles mosquitoes in Thailand's most P. knowlesi-endemic area is still lacking. The genetic diversity of insect vectors in endemic areas is critical, providing useful information about the taxonomic status of species and the spatial limits of natural populations [20]. This knowledge permits researchers to understand and predict the epidemiology, distribution, and transmission dynamics of vector-borne diseases based on the basic biology of the vectors [20].
The cytochrome c oxidase subunit I (COI) gene is a frequently utilized marker in molecular studies on the genetic diversity of insects, including Anopheles mosquitoes, due to its high accuracy [21][22][23]. In addition, genetic monitoring of Anopheles mosquitoes also allows for more effective vector control strategies. Malaria vector control via the use of insecticide-treated nets (ITNs), long-lasting insecticide nets (LLINs), and indoor residual spraying (IRS) of insecticides is the primary technique for reducing malaria transmission [24]. However, insecticide-resistant Anopheles mosquitoes have been reported in many countries [25]. The voltage-gated sodium channel (VGSC) is the main target of both pyrethroid and dichlorodiphenyltrichloroethane (DDT) insecticides [25]. Molecular studies can help to examine the polymorphisms associated with the resistance of several insects, including Anopheles mosquitoes, against pyrethroids and DDT, also called knockdown resistance (kdr), based on genetic mutations of codon 1014 in the VGSC gene [26][27][28]. To optimize entomological information for vector control strategies in Thailand's most P. knowlesi-endemic area, in-depth molecular information on Anopheles mosquitoes is required. The present study aimed to investigate genetic diversity based on COI, detect Plasmodium parasites, and screen for VGSC-mutation-mediated knockdown resistance of Anopheles mosquitoes in Ranong province, which is Thailand's most P. knowlesi-endemic area. Ethics Statement The current investigation was conducted in compliance with the conditions outlined in the guidelines for animal care and usage in research developed by Suan Sunandha Rajabhat University in Thailand. The Institutional Animal Care and Use Committee of Suan Sunandha Rajabhat University in Bangkok, Thailand, reviewed and approved all experimental procedures and fieldwork beforehand (Animal Ethics Permission number: IACUC 64-010/2021). Study Sites and Sample Collection We conducted our study in Ranong province, Thailand's most P. knowlesi-endemic area [18]. Ranong is the northernmost province on Thailand's Andaman coast and shares a border with Myanmar. It is located around 580 km from Bangkok, Thailand. Three different locations in Ranong province (northern, central, and southern) were selected for Anopheles collection. We conducted adult Anopheles collections once every two months between January and June 2022, in accordance with the survey plan of the Ranong Vector Borne Disease Control Center. Anopheles mosquitoes from the three locations in Ranong province (Figure 1) were collected throughout the night, between 18:00 and 6:00, over five nights, using 12 BG-Pro CDC-style traps (BioGents, Regensburg, Germany) with BG-lure cartridges (BioGents, Regensburg, Germany) and solid carbon dioxide (dry ice). The mosquito bags were removed from the traps in the morning (6:00 a.m.) and kept in the freezer at −20 °C until the mosquitoes died. Then, the gathered mosquito samples were brought to the College of Allied Health Sciences laboratory at Suan Sunandha Rajabhat University in Thailand and stored in the freezer at −20 °C until further use.
Morphological and Molecular Species Identification The initial identification of wild-caught Anopheles mosquitoes at the species/group level was performed via morphological examination under a stereo microscope (Nikon Corp., Tokyo, Japan), using an illustrated key of adult Anopheles from Thailand [29]. Each morphologically identified Anopheles specimen was kept individually in a 1.5 mL microcentrifuge tube with silica gel (one specimen/tube) and stored at −20 °C until required. Next, all Anopheles specimens were reconfirmed using molecular methods to distinguish sibling species and prevent operator mistakes. Genomic DNA was extracted from the legs of individual Anopheles mosquitoes using the FavorPrep™ mini kit (Favorgen Biotech, Ping-Tung, Taiwan) according to the manufacturer's protocol. Multiplex allele-specific PCR (MAS-PCR) assays based on the internal transcribed spacer 2 (ITS2) region of DNA were used to identify the following: (1) five sibling species of the Dirus complex, including An. dirus s.s., An. baimaii, An. cracens, An. nemophilous, and An. scanloni; (2) five species of the Maculatus group, including An. maculatus s.s., An. dravidicus, An. pseudowillmori, An. rampae, and An. sawadwongporni; and (3) five species of the Funestus group, including An. minimus s.s., An. harrisoni, An. aconitus, An. pampanai, and An. varuna, according to the previous protocols of Walton et al. [30], Walton et al. [31], and Garros et al. [32], respectively. For the molecular identification of other Anopheles species, we compared the COI Anopheles sequences to the barcode reference library. Detection of Malaria-Infected Anopheles Mosquitoes For the screening of Plasmodium sporozoites in Anopheles mosquitoes, the fast COX-I PCR method was used, as described previously by Echeverry et al. [33]. We extracted Plasmodium DNA from the head and thorax of each female Anopheles mosquito. An approximately 520 bp segment of the Plasmodium COI region was amplified using the primer pair COX-IF (5′-AGA ACG AAC GCT TTT AAC GCC TG-3′) and COX-IR (3′-ACT TAA TGG TGG ATA TAA AGT CCA TCC wGT-5′).
The PCR amplifications were conducted using a thermal cycler (Biometra TOne Series, Germany) in a total volume of 25 µL, containing 4 µL of DNA template, 1 µM of each primer, 1× Phusion Blood PCR Master Mix (Thermo Scientific, Waltham, MA, USA), and distilled water up to 25 µL. The PCR conditions were as follows: an initial step at 98 °C for 4 min, followed by 70 cycles of 98 °C for 1 s, 69 °C for 5 s, and 72 °C for 35 s, with a final extension at 72 °C for 10 min. Each PCR run contained negative (water without DNA) and positive (DNA of P. falciparum from culture) controls. PCR products were separated by electrophoresis on 1% agarose gels stained with Midori Green DNA stain (Nippon Gene, Tokyo, Japan) and visualized under an ImageQuant LAS 500 imager (GE Healthcare Japan Corp., Tokyo, Japan). A specimen showing a clear DNA band of 540 bp on the agarose gel was considered infected (Plasmodium-genus-positive). Had a positive sample been found, the PCR products would have been sent to a service company for DNA sequencing, and the sequences would then have been used for species assessment of the Plasmodium parasites by comparison with reference sequences in the Barcode of Life Data System (BOLD) database.
Polymerase Chain Reaction (PCR) and Sequencing of the COI and VGSC Genes
Genomic DNA derived from the legs of each Anopheles specimen was used to amplify COI and VGSC gene fragments. PCR amplification of approximately 709 bp of the COI gene was performed using the forward primer COI_F (5′-GGA TTT GGA AAT TGA TTA GTT CCT T-3′) and the reverse primer COI_R (5′-AAA AAT TTT AAT TCC AGT TGG AAC AGC-3′) [34]. PCR was conducted according to the previously reported procedure of Chaiphongpachara et al. [35]. PCR amplification of an approximately 300 bp fragment flanking codon 1014 of the VGSC gene was performed using a forward primer (AgF_kdr) and a reverse primer. PCR amplification products of COI and VGSC were visualized on 1% agarose gels stained with Midori Green DNA stain (Nippon Gene, Tokyo, Japan) under an ImageQuant LAS 500 imager (GE Healthcare Japan Corp., Tokyo, Japan) for quality evaluation, before being sent to Solgent Company (Daejeon, South Korea) for purification of the PCR products and DNA sequencing.
Molecular Analyses
The trace files of the COI and VGSC sequences for the Anopheles specimens were manually aligned, checked, and edited, and consensus sequences were created from the forward and reverse sequences using BioEdit version 7.2 [36]. Afterward, the COI and VGSC consensus sequences were aligned and manually edited using Clustal X [37] in the MEGA X (Molecular Evolutionary Genetics Analysis) software [38]. We compared the COI sequences of our Anopheles specimens to those available in GenBank to confirm species identification, using the Basic Local Alignment Search Tool (BLAST, available online: http://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 10 October 2022) of the National Center for Biotechnology Information (NCBI) and the BOLD (https://www.boldsystems.org/, accessed on 10 October 2022) database. Acceptance of Anopheles specimens required ≥98% nucleotide sequence identity with the available species sequences in the databases [39]. In addition, the intraspecific and interspecific genetic distances of all Anopheles species were calculated using the Kimura two-parameter (K2P) distance algorithm in MEGA X.
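For reference, the K2P model corrects the observed proportions of transitions (P) and transversions (Q) for multiple substitutions: d = −(1/2)ln(1 − 2P − Q) − (1/4)ln(1 − 2Q). The following is a minimal Python sketch of this calculation on a pair of aligned sequences (illustrative only; MEGA X additionally handles pairwise averaging and more elaborate gap treatment):

```python
from math import log

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1: str, seq2: str) -> float:
    """Kimura two-parameter distance between two aligned sequences.

    P = proportion of transition differences (A<->G, C<->T),
    Q = proportion of transversion differences (purine <-> pyrimidine).
    Sites with gaps or ambiguous bases are skipped.
    """
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguity codes
        compared += 1
        if a == b:
            continue
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    p, q = transitions / compared, transversions / compared
    return -0.5 * log(1 - 2 * p - q) - 0.25 * log(1 - 2 * q)

# Toy example with two short aligned fragments (two transitions, no transversions)
print(round(k2p_distance("ACGTACGTAC", "ACGTACGCAT"), 4))
```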
We constructed a phylogenetic tree based on maximum likelihood (ML) with the Tamura three-parameter model plus gamma distribution and invariant sites (the best-fit substitution model) for the COI sequences, using 1000 bootstrap replicates in MEGA X, in order to examine the evolutionary relationships among Anopheles species. We used the DnaSP 6 (DNA Sequence Polymorphism) software [40] to calculate the number of polymorphic (segregating) sites (s), nucleotide diversity (π), number of haplotypes (h), haplotype diversity (Hd), and average number of nucleotide differences (k), and to perform statistical tests of neutrality, namely Tajima's D test [41] and Fu's Fs test [42], based on the mitochondrial COI gene, to investigate the genetic diversity of each Anopheles species. In addition, haplotype networks of each Anopheles species were created using the median-joining network method in PopArt 1.7 to visualize the relationships among Anopheles individuals. For the screening of kdr mutations in the VGSC gene, we investigated the VGSC sequences of all the specimens to find the known resistance mutations (L1014C, L1014F, and L1014S).
Anopheles Mosquitoes
In this study, 114 female Anopheles mosquitoes were caught in three geographical regions of Ranong province. Molecular identification revealed that the specimens represented eight species: An. aconitus, An. baimaii, An. barbirostris A3, An. epiroticus, An. jamesii, An. maculatus s.s., An. minimus s.s., and An. sawadwongporni (Table 1). Their distributions in Ranong province, as obtained in this study, are presented in Figure 1. The southern part of Ranong had the highest number of Anopheles species (n = 7), followed by the northern (n = 4) and central (n = 2) parts. Anopheles baimaii was the only Anopheles species found across all three parts of Ranong province. In contrast, An. aconitus and An. sawadwongporni were extremely rare, with just a single specimen found in the southern and northern parts of the province, respectively. Thus, An. aconitus and An. sawadwongporni, represented by only one specimen each, were excluded from the genetic diversity analyses.
Malaria Parasite Detection
According to the fast COX-I PCR results, none of the 114 Anopheles mosquitoes examined were infected with Plasmodium parasites.
Phylogenetic Analysis
The ML phylogenetic tree showed that the Anopheles species identified in this study were clearly separated into clades, supported by perfect bootstrap values (100%) (Figure 2). The An. minimus clade was sister to the An. aconitus clade, with the An. jamesii and An. epiroticus clades progressively more distant. The An. barbirostris A3 and An. baimaii clades, which were sister clades, had the most distant relationships with the other species. The sister clades of the An. maculatus group were positioned between the An. barbirostris A3 and An. baimaii sister clades and the An. epiroticus clade. In addition, the phylogenetic analysis based on COI sequences indicated that the An. minimus and An. epiroticus clades were each split into two distinct subclades, and the An. jamesii clade was split into three distinct subclades.
Genetic Diversity
One hundred and twelve sequences were used to estimate genetic diversity (the single sequences of An. aconitus and An. sawadwongporni were excluded). The nucleotide and haplotype diversity values of the six Anopheles species are presented in Table 4.
Haplotype Relationships
The frequencies of and relationships among the haplotypes of the 112 sequences of An. baimaii, An. barbirostris A3, An. epiroticus, An. jamesii, An. maculatus, and An.
minimus identified in Ranong based on COI sequences are shown in median-joining haplotype networks (Figure 3). The network analysis of An. baimaii revealed that H1 was the central haplotype, highly connected to haplotype lines, and the only haplotype found in all localities of Ranong (northern, central, and southern parts). The haplotype network of An. minimus showed two distinct genetic lineages, A and B, based on the mutation steps along the haplotype lines, similar to the An. epiroticus network, which also showed two lineages. The central haplotypes of An. minimus and An. epiroticus could not be identified because their haplotype frequencies did not differ clearly. The haplotype network of An. jamesii showed that H8 was the most common haplotype and that H1 was shared between the northern and southern populations; based on the mutation steps, three genetic lineages of An. jamesii were identified. The haplotype network of An. barbirostris A3 showed that H1 was the most frequent haplotype, with all haplotypes connected in a straight line, whereas the An. maculatus network showed that H2 was the most frequent haplotype and H3 was shared between samples from the central and southern parts.
Figure 3. COI haplotype networks of Anopheles specimens collected from three locations in Ranong province, Southern Thailand. Anopheles species represented by only one specimen, including An. aconitus and An. sawadwongporni, were excluded from the analyses. A colored circle represents each haplotype, and the circle's size is proportional to the total number of sequences in that haplotype. The number of mutations is shown by the dashes along the haplotype lines; the different colored circles represent different locations in Thailand.
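The kdr screening reported in the next section reduces to translating the codon at position 1014 of each VGSC consensus sequence and comparing the residue to the leucine wild type. A minimal Python sketch, assuming the codon has already been excised from the aligned sequence (the codon table below is deliberately partial, covering only the residues relevant to L1014 genotyping):

```python
# Partial codon table: leucine (wild type) plus the residues produced by
# the known kdr alleles at VGSC codon 1014 (Phe, Cys, Ser).
CODON_TO_RESIDUE = {
    "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "TTT": "F", "TTC": "F",
    "TGT": "C", "TGC": "C",
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S", "AGT": "S", "AGC": "S",
}

def classify_kdr(codon_1014: str) -> str:
    """Classify a VGSC codon-1014 genotype as wild type or a kdr variant."""
    residue = CODON_TO_RESIDUE.get(codon_1014.upper())
    if residue == "L":
        return "L1014 wild type"
    if residue in {"F", "C", "S"}:
        return f"kdr variant L1014{residue}"
    return "unresolved (ambiguous or unexpected codon)"

# All 114 specimens in this study carried the wild-type codon; for example:
print(classify_kdr("TTA"))  # -> L1014 wild type
print(classify_kdr("TTT"))  # -> kdr variant L1014F
```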
Screening for VGSC-Mutation-Mediated Knockdown Resistance
One hundred and fourteen DNA sequences of the VGSC gene fragment from the Anopheles specimens were checked for knockdown resistance mutations. All the sequenced specimens presented only the L1014 wild-type allele in the VGSC gene. No kdr-resistant alleles (L1014C, L1014F, or L1014S) were found in any of the 114 Anopheles specimens screened (Figure 4, Table 5).
Table 5. Screening for kdr mutations in the voltage-gated sodium channel (VGSC) gene in specimens of the eight Anopheles species obtained in this study.
Discussion
In this study, specimens of eight species of Anopheles mosquitoes collected from Ranong province, Thailand's most P. knowlesi-endemic area, were subjected to species confirmation using molecular methods; the species comprised An. aconitus, An. baimaii, An. barbirostris A3, An. epiroticus, An. jamesii, An. maculatus s.s., An. minimus s.s., and An. sawadwongporni. In the northern part of Ranong, four Anopheles species were found (An. baimaii, An. jamesii, An. sawadwongporni, and An. minimus s.s.), with An. minimus s.s. as the dominant species. In the central area, An. baimaii and An. maculatus s.s. were the dominant species. Most An. minimus mosquitoes live in forest edge areas, so they are common in the northern part of Ranong, where there are many forest edges. Meanwhile, the Hat Som Paen reservoir area, which consists of densely forested high mountains in the central part of Ranong, is a suitable habitat for An. baimaii. We also found An. maculatus in the reservoir area, which is likely their habitat, although previous reports indicated that they were predominantly distributed along the edges of forests [15,16]. In the southern part of Ranong, An. epiroticus was the dominant species, because its habitat is coastal areas. However, no Anopheles mosquitoes in this survey were found to be infected with Plasmodium parasites. Nonetheless, some species of Anopheles mosquitoes require special entomological surveillance, based on previous reports of P. knowlesi infections in other countries. Several previous studies in Malaysia have reported that Anopheles mosquitoes in the Leucosphyrus group are important vectors of P. knowlesi [43][44][45]. Anopheles baimaii (previously known as An. dirus species D) is a member of the Dirus complex and belongs to the Leucosphyrus group [46]. This Anopheles species is considered the primary vector of human malaria in Thailand [47]. Our results indicated that it is distributed in forested areas throughout Ranong. In addition, An. nemophilous, which also belongs to the Leucosphyrus group, has been reported in Ranong province [48]. A previous study reported P. knowlesi infections in An. sundaicus s.l. on Katchal Island, India [14]. Anopheles epiroticus (previously known as An. sundaicus species A) is a common Anopheles species found near coastal areas in Ranong and other provinces of Thailand [49,50]. This Anopheles species belongs to the Sundaicus complex and is considered Thailand's secondary malaria vector [48,51]. Anopheles barbirostris species A3 is a cryptic species in the Barbirostris complex, belonging to the Barbirostris subgroup [52]. In Thailand, this mosquito species has previously been reported only in Kanchanaburi province [52]. Our study is the first to demonstrate the additional distribution of An. barbirostris A3 in Thailand. However, An. barbirostris A3 is another species that should not be overlooked, because An. donaldi, a member species of the Barbirostris subgroup, has been reported to carry P. knowlesi infections in Lawas, Northern Sarawak, Malaysian Borneo [9].
In addition, investigations of the blood meal sources of malaria vector mosquitoes using specific PCR assays should be continued in the future to determine whether these anophelines are anthropophilic, zoophilic, or zoo-anthropophilic. The host preferences of Anopheles species are very important pieces of information in evaluating their ability to transmit simian malaria to humans [53]. The genetic distances between the eight Anopheles taxa based on the 114 COI sequences showed that the maximum intraspecific and minimum interspecific values were nonoverlapping, indicating the existence of a distinct barcode gap. The presence of a barcoding gap confirms the suitability of DNA barcoding for species identification [54]. Recently, Chaiphongpachara et al. [35] succeeded in identifying several mosquito species in Thailand based on nucleotide differences in the COI gene, except for An. dirus and An. baimaii. Our results provide supporting evidence that DNA barcoding based on COI can be used to identify mosquito species. However, other DNA markers, such as ITS2, must be used for species identification in cases where Anopheles mosquitoes of the Dirus complex are found [55]. The nucleotide diversity (π) and haplotype diversity (Hd) are important genetic indicators used to measure genetic diversity among populations. The nucleotide diversity values of An. baimaii, An. barbirostris A3, An. epiroticus, An. jamesii, An. maculatus, and An. minimus in Ranong were lower than the haplotype diversity values, indicating a recent population expansion from a small effective population size after a bottleneck [56]. This demographic event occurred long enough ago for haplotypes to increase through mutation, but not long enough for large sequence differences to accumulate [57]. Furthermore, a high level of haplotype diversity results from a large population and from diverse environments and living habits suitable for rapid development in nature [58]. Our results showed that genetic diversity values were similar in some species and different in others, for multifactorial reasons. Mosquito genetic diversity has both internal and external causes [59]. Internal causes of genetic diversity are genetic mutations or changes, whereas external factors are strongly related to the ecological environment of the mosquitoes [59]. Genetic diversity is an important factor that allows natural populations to adapt to and survive long-term changes or adverse environmental conditions [60]. Ranong is one of the provinces in southern Thailand with unique ecological features. The area is covered by mountains and fertile forests, and is adjacent to the Andaman Sea. It is also the wettest province in Thailand. It has been previously reported that Anopheles mosquito populations can swiftly adapt to alterations in environmental conditions, which may impact the genetic diversity within species at the population level and their gene flow [56,61]. However, a limitation of our study was the assessment of Anopheles vectors in only one endemic area, which provided insufficient data to address this question. The Fu's Fs and Tajima's D values of An. baimaii, An. barbirostris A3, An. epiroticus, An. jamesii, and An. maculatus in Ranong were all negative, supporting population size expansion in Ranong. If a population is selectively neutral and at equilibrium between genetic drift and selectively neutral mutation, the Tajima's D value is expected to be zero.
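Numerically, these indices connect directly to the neutrality test: Hd and π are computed from the alignment, and Tajima's D contrasts the mean pairwise difference count (k) with Watterson's estimator S/a1. A compact Python sketch with toy inputs (illustrative only; DnaSP implements the same standard estimators, following Tajima 1989):

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def haplotype_diversity(seqs):
    """Nei's haplotype diversity: Hd = n/(n-1) * (1 - sum of squared haplotype frequencies)."""
    n = len(seqs)
    freqs = [c / n for c in Counter(seqs).values()]
    return n / (n - 1) * (1 - sum(f * f for f in freqs))

def mean_pairwise_differences(seqs):
    """Average number of nucleotide differences between sequence pairs (k)."""
    pairs = list(combinations(seqs, 2))
    return sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)

def tajimas_d(k, s, n):
    """Tajima's D from k, the number of segregating sites (s), and sample size (n)."""
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (k - s / a1) / sqrt(e1 * s + e2 * s * (s - 1))

# Toy alignment: 4 sequences, 3 haplotypes, 2 segregating sites
aln = ["ACGTACGT", "ACGTACGT", "ACGTTCGT", "ACCTTCGT"]
k = mean_pairwise_differences(aln)
print(round(haplotype_diversity(aln), 3))        # 0.833
print(round(tajimas_d(k, s=2, n=len(aln)), 3))   # D for the toy data (tiny samples are not meaningful)
```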
Positive Tajima's D values indicate a sudden decrease in population size and/or balancing selection, whereas negative Tajima's D values indicate population size expansion after a recent bottleneck or mutational selection [41]. Our results were consistent with the population structure of An. baimaii in Thailand, indicating that the population is expanding [56]. The ML phylogenetic tree and haplotype network results revealed two distinct genetic lineages, A and B, of An. minimus and of An. epiroticus in Ranong. Recently, Bunmee et al. [56] reported the existence of A and B lineages of An. minimus s.s. in Thailand, which agrees with our findings. In many of Thailand's malaria transmission areas, such as Tak, Surat Thani, Yala, Chanthaburi, and Trat, there are two lineages of An. minimus, which are often found together [56]. However, it is unclear whether the two lineages differ in their potential to transmit malaria or in other behaviors. In addition, our study is the first to reveal two distinct genetic lineages of An. epiroticus based on the COI gene, reflecting genetic variation and local adaptation. Syafruddin et al. [62] explained that the COI gene is suitable for assessing genetic variation within populations of An. epiroticus because mtDNA has a high mutation rate. However, this gene cannot be used as a molecular marker to differentiate between An. epiroticus and its sibling species [62]. Although three genetic lineages of An. jamesii were identified, only one group had many samples, whereas the other two groups had only one member each. Consequently, future genetic studies on this species should be conducted. The early detection and surveillance of VGSC-mutation-mediated knockdown resistance (kdr) in Anopheles populations can provide entomological data on the causes of pyrethroid resistance, informing the development of strategies to control malaria vectors [63]. The present study found no kdr mutations in the VGSC gene among Anopheles mosquitoes from Thailand's most P. knowlesi-endemic area. This result is consistent with previous Anopheles investigations in Ubon Ratchathani province, northeastern Thailand [64]. However, this study is limited by a lack of data on the insecticide susceptibility of the mosquito samples tested. Therefore, we do not know the true state of insecticide resistance in these Anopheles populations. Further entomological investigations into the susceptibility of adult mosquito vectors to insecticides are required.
Conclusions
Our study demonstrated the genetic diversity of Anopheles mosquitoes in Thailand's most P. knowlesi-endemic area. Our genetic diversity analysis will contribute to a more comprehensive genetic profile of Anopheles vectors in Thailand. In addition, our attempts to detect P. knowlesi infection in Anopheles mosquitoes did not reveal any infected specimens. However, three Anopheles species, An. baimaii, An. barbirostris A3, and An. epiroticus, should be kept under special surveillance, as P. knowlesi infections have been found in these or closely related species in other countries. This entomological information could guide the selection of appropriate methods for controlling these Anopheles populations, such as insecticide-treated nets (ITNs) and indoor residual spraying (IRS), in order to control the spread of monkey malaria to humans in Thailand's most P. knowlesi-endemic area.
In addition, educating the population about vector breeding sites and about strategies for protection against Anopheles mosquitoes, such as applying mosquito repellent or wearing protective clothing when entering forests where monkeys reside, is a crucial way to help reduce the incidence of knowlesi malaria.
7,295.4
2022-12-01T00:00:00.000
[ "Environmental Science", "Biology", "Medicine" ]
Assessing and Enhancing Movement Quality Using Wearables and Consumer Technologies: Thematic Analysis of Expert Perspectives
Background: Improvements in movement quality (ie, how well an individual moves) facilitate increases in movement quantity, subsequently improving general health and quality of life. Wearable technology offers a convenient, affordable means of measuring and assessing movement quality for the general population, while technology more broadly can provide constructive feedback through various modalities. Considering the perspectives of professionals involved in the development and implementation of technology helps translate user needs into effective strategies for the optimal application of consumer technologies to enhance movement quality.
Objective: This study aimed to obtain the opinions of wearable technology experts regarding the use of wearable devices to measure movement quality and provide feedback. A secondary objective was to determine potential strategies for integrating preferred assessment and feedback characteristics into a technology-based movement quality intervention for the general, recreationally active population.
Methods: Semistructured interviews were conducted with 12 participants (age: mean 42, SD 9 years; 5 males) between August and September 2022 using a predetermined interview schedule. Participants were categorized based on their professional roles: commercial (n=4) and research and development (R&D; n=8). All participants had experience in the development or application of wearable technology for sports, exercise, and wellness. The verbatim interview transcripts were analyzed using reflexive thematic analysis in QSR NVivo (release 1.7), resulting in the identification of overarching themes and subthemes.
Results: Three main themes were generated as follows: (1) "Grab and Go," (2) "Adjust and Adapt," and (3) "Visualize and Feedback." Participants emphasized the importance of convenience to enhance user engagement when using wearables to collect movement data.
Introduction
Wearable technology is ubiquitous in the modern era, enabling behavioral and physiological tracking of numerous variables, including the 24-hour movement profile [1,2]. However, commercially available wearable devices have almost exclusively focused on the quantification of such measurements, largely overlooking the emerging capabilities of many devices to assess, and potentially improve, human movement quality [1][2][3]. While there is a vast array of evidence supporting the need to be physically active for overall health and well-being [4][5][6], research has also demonstrated the physiological and cognitive benefits associated with improved motor competence [7]. Indeed, better movement quality has a catalytic effect on facilitating lifelong physically active lifestyles [7,8].
In elite sports, movement quality is widely analyzed as athletes strive to improve performance [9]. Yet, there is a dearth of opportunities for the wider population, encompassing youths through to older adults, to capitalize on either the health or performance benefits of better movement quality. Indeed, the most common means of assessing movement quality, such as optical motion capture [10], the use of depth cameras [11,12], or the employment of a coach [13], have associated practical and financial limitations [10,11,13]. However, wearable technology may help to overcome these barriers [13]. Wearable devices, equipped with motion-detecting sensors such as accelerometers, have the capability to capture movement data. These components can be leveraged to assist in assessing and improving an individual's movement quality during specific activities [14]. As the capabilities of wearable technology continue to improve, it is important to identify how such devices can best be implemented in the general, nonelite population. Furthermore, it is important not only to facilitate user-friendly means of collecting data but also to optimally deliver specific and contextualized feedback, using smart technology applications [14], to maximize the improvements in movement quality [2,15].
A growing body of scientific literature has shown the capabilities of wearable devices to detect and classify movement discrepancies. Specifically, wearable devices have been frequently validated for use in the assessment of movement quality in clinical contexts [16][17][18], as well as for specific sports and exercises [19][20][21][22][23][24]. Kianifar et al [16], for example, demonstrated that a binary machine-learning algorithm could distinguish between "good" and "poor" repetitions of a single-leg squat with 90% accuracy using a single wearable device worn on the ankle, and 96% accuracy using 3 devices (ankle, thigh, and lower back). Moreover, O'Reilly et al [22][23][24] classified specific movement discrepancies using a network of 5 sensors, with accuracies of 78%, 80%, and 70% for the barbell deadlift, body weight squat, and body weight lunge, respectively. However, the delivery of feedback is seldom considered within the scope of such validation studies [17,22-25] or is expected to be provided by a trained clinician [16].
The perspectives of influential figures, such as parents [26], teachers [27], and health professionals [28,29], have been explored in the context of wearable-based physical activity tracking. More prevalently, however, user perceptions around the practicalities and limitations of wearable activity trackers have been researched [30][31][32][33]. Collectively, these studies highlight several common barriers to their use: device unreliability and an associated lack of trust, a prerequisite for technological literacy, and age-related usability challenges for both young children and older adults due to physical constraints. Additionally, user opinions around physical activity feedback have been explored extensively, specifically in relation to how much physical activity an individual has performed [27,34,35], which is important to maximize user engagement [35].
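As a concrete illustration of the kind of binary movement-quality classification described above, the following Python sketch trains a simple classifier on summary features from a single accelerometer. The features, labels, and synthetic data here are hypothetical stand-ins, not the pipeline used by Kianifar et al or O'Reilly et al:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def rep_features(acc: np.ndarray) -> np.ndarray:
    """Summary features for one repetition from a tri-axial accelerometer
    window (n_samples x 3): per-axis RMS and per-axis range."""
    rms = np.sqrt((acc**2).mean(axis=0))
    rng_ = acc.max(axis=0) - acc.min(axis=0)
    return np.concatenate([rms, rng_])

def synth_rep(poor: bool) -> np.ndarray:
    """Synthetic repetition: 'poor' reps are modeled (as a toy assumption)
    with larger movement variability around gravity."""
    noise = rng.normal(0, 1.5 if poor else 0.5, size=(200, 3))
    return noise + np.array([0.0, 0.0, 9.81])

# Build a labeled data set of 200 synthetic repetitions (0 = good, 1 = poor)
X = np.array([rep_features(synth_rep(poor=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```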
Consumer perspectives play a valuable role in identifying user needs, particularly in the preliminary stages of product development, recognizing that designers and developers may overlook requirements that are specific to the user demographic [36]. However, relying solely on consumer insights is limiting, as users are typically constrained by their current mental models; they lack the ability to foresee disruptive context changes, such as technological advancements, and opportunities for novel innovations [37]. Moreover, it is reasonable to assume that consumers lack the comprehensive understanding of critical aspects of product development that designers and developers possess, particularly regarding technological and manufacturing capabilities. Consequently, it is beneficial to implement synergistic design strategies that combine consumer insights with the expertise and interpretive qualities of developers and individuals in consumer-facing positions [37,38]. Yet, to date, there is a notable gap in the literature regarding the perceptions of individuals who have worked directly in the development and application of wearable technology. Furthermore, there is also a dearth of formative research that considers the practicalities of wearables and the provision of technology-based feedback specifically for assessing the quality of human movement (ie, how well an individual moves) among the general population. Therefore, the aim of this study was to ascertain the opinions of experts with combined experience in the development and application of wearable technology, with a focus on its application for measuring movement quality and providing feedback. Additionally, it sought to identify potential strategies for incorporating the participants' preferred assessment and feedback characteristics into a technology-based intervention for the general, recreationally active population. This study represents the first of two, with the second focusing on consumer opinions, together seeking to provide a comprehensive, holistic perspective on the use of consumer technologies for assessing and enhancing movement quality.
Data Collection and Analysis
A total of 12 adults (age: mean 42, SD 9 years; 5 males) were recruited via an intraorganizational email network. All participants had extensive experience in the development or application of technology for sports, exercise, and wellness. Purposive sampling was used to ensure that a diverse and balanced range of specialists from both commercial and product development roles were selected from across an organization. Although data saturation is commonly used to ensure an adequate sample size within qualitative research [39], it was not conducive to the analytical approach used in this study [40]. Therefore, the pragmatic guiding concept of information power was instead applied to appraise and confirm the adequacy of the final sample size, based on the focused study aims, a dense intrainstitutional sample, and theory-guided investigation methods [41].
Semistructured interviews were conducted primarily in person at the participants' workplace by the lead researcher (TAS), with 1 interview conducted via Microsoft Teams (version 1.5.00.21463). All interviews took place between August 18, 2022, and September 8, 2022. For each interview, only TAS and the interviewee were present. The interviews lasted a mean of 38 (SD 16) minutes and used a predetermined interview schedule that enabled follow-up questions and prompts (Multimedia Appendix 1). Participants were also presented with an information sheet outlining the overarching rationale for the study and their participation (Multimedia Appendix 2). All interviews were conducted in English with fluent, nonnative English speakers. The initial questions aimed to stimulate free thinking. However, as the discussion progressed, the questions became more targeted and contextualized. When discussing visual feedback, examples were used as prompts to direct participants and aid understanding (Figures S1-S3 in Multimedia Appendix 1). The examples were selected to indicate ways in which visualizations have previously been used, incorporating a combination of technicality, simplicity, and an array of designs. The prompts provided context and offered an opportunity for reflection, while enabling participants to express their opinions relating to certain features and characteristics of the visualizations. The interview schedule was compiled by TAS and MAM, and subsequently approved by the wider research team. Interviews were audio recorded using a Philips DVT3400 Voice Tracer (Koninklijke Philips N.V.) and subsequently transcribed verbatim (Multimedia Appendix 3). To ensure anonymity, the reported characteristics of the 12 participants have been intentionally restricted. However, participants could be classified into 2 broader categories based on their job role: commercial (n=4) or research and development (R&D; n=8). Consequently, participants with a commercial function have been assigned identification codes C1-C4, while those in R&D have been designated RD1-RD8.
The interviews were analyzed using the 6-stage reflexive thematic analysis (RTA) process developed by Braun and Clarke [42][43][44][45]. This method was used to identify repeated patterns in the data, organized around particular themes [43]. The use of RTA is conducive to exploring deeper, underlying meanings within the data, rather than superficially identifying and reporting what participants said [43]. Additionally, RTA offered a structure to which the analysis could adhere while allowing themes to develop organically without undue restriction [43]. Initial familiarization with the data involved relistening to the audio recordings of each interview while taking notes and compiling a summary report for each of the 12 interviews [43][44][45], comparable with the approach taken by Byrne [46]. The audio transcriptions were uploaded to NVivo (release 1.7; QSR International), where TAS generated the initial codes before undertaking a period of refinement and organization. Themes and subthemes were then iteratively created, assisted by thematic mapping (Multimedia Appendix 4). The initial codes were developed inductively without any predetermined coding framework, though, as is typical in RTA, a degree of deductive analysis was required to ensure that the included codes and themes were related to the overarching research direction [42,44,46]. Notably, by using both inductive and deductive methods, the research questions were iteratively refined throughout the 6 stages to ensure relevance to the intended application. Both semantic and latent interpretations of the data were investigated [43,44,46].
Philosophical and Theoretical Underpinnings
The study provided a platform for wearable technology experts to share their professional knowledge and personal experiences, providing valuable information related to improving movement quality and allowing consumer demands and future development opportunities to be explored. Questions regarding movement-quality feedback were theoretically underpinned by Schmidt and Wrisberg [15] and Fleming and Mills [47]; that is, they recognized that individuals may respond better to certain sensory modalities than others [47], and that motor-skill learning and retention can be maximized through the optimal delivery of feedback, though this is both individual and contextual [15]. However, participants were afforded the freedom to think creatively with minimal restrictions when discussing data collection.
Researcher Positionality
Reflexivity implies that there is an inherent, yet valued, subjective component to RTA [42]. As such, it is important to understand how researcher positionality may have influenced the findings of the study [42,48]. The lead researcher (TAS) is a White British male who is a current PhD candidate and holds a BEng in Mechanical Engineering and an MSc by Research in Sport and Exercise Science. TAS also has extensive industrial experience, albeit in industries largely unrelated to wearable technology. As such, these experiences were influential during the analysis, where considerations around potential applications and future developments were, at times, central to the thought process. Prior to commencing this study, TAS was already familiar with wearable technology, both as a researcher and as a consumer. Further, TAS had briefly met some of the participants ahead of the interviews.
Methodological Rigor
To increase credibility and ensure methodological rigor, the data analysis process was discussed with KAM, and the generated themes were checked to ensure that they appropriately represented the data [29]. Additionally, a "critical friend" used the checklist devised by Braun and Clarke [45] to evaluate the quality of the thematic analysis [48]. Any unsatisfactory responses to the checklist items were highlighted and feedback was provided, after which amendments were made until all questions in the checklist were adequately answered. The COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist [49], an established method of ensuring explicit and comprehensive reporting of qualitative research, was also used (Multimedia Appendix 5).
Ethical Considerations
This study was approved by the Swansea University College of Engineering Research Ethics and Governance Committee (TS_01-07-22). All study data were anonymized before analysis, and no other personal or private information was included in the data set, to protect the privacy and confidentiality of the study participants. All participants provided written informed consent before participation.
User Interest
User interest was identified as a key determinant of wearable technology use. The experts explained the need to capture user interest through functional hardware, but also highlighted the need for a device that "looks cool, nice" (RD2), or even perhaps discreet devices that are "super comfortable" (RD2) and that "others wouldn't be able to see so clearly that you're wearing some kind of device" (RD6).
Participants also described the importance of users being aware of the need for movement-quality feedback and the associated benefits of good movement, proposing that technology use could promote this. The interviewees suggested ways to encourage the implementation of technology in the first instance, highlighting that "if they will get really precious information" (RD5) and "if the information is good enough" (RD3), users will be more receptive to wearing devices and receiving feedback. However, the experts largely focused on the various benefits of the technology, especially for "the beginners (...) or those who haven't been physically active" (C3), who may lack understanding of good movement quality. It was suggested that such individuals may have "a common fear, in a way, that [they] are doing this movement wrong, and [they] might injure [themselves]" (RD8), hence streamlined feedback delivery would be welcomed.
Variety was proposed as a means of retaining user interest, with 1 participant who had used wearable technology personally stating that they "got bored of [visualizations] because they never changed" (RD2). It was also felt that the implementation of a progression-based system would be beneficial when learning to move proficiently.
I think it could be good to have the basics. "This is the basics", and then when you've, let's say, you've done that exercise on five different occasions, and then you can make the assumption that "Okay, now you're familiar with these three things", that it tells you, then you could move on to the next three things.
[C4]
However, it is important to recognize that progression may fluctuate and that regression in skill proficiency may also occur. Hence, an adaptable system that accommodates both advancements and potential regressions would likely be more appropriate. Additionally, where feedback was considered for use in movement-skill learning, it was proposed that the learning phase may have a finite lifespan and that the feedback would no longer be required once skill mastery had been achieved.
So, using the app like that, with the pictures, with the squat, and things like that, it's kind of hard to imagine that somebody uses it for a long time. [C3]
Participants widely spoke of the need to avoid information overload. Too much information was generally viewed as a barrier to technology use. When presented with an example of a technical visualization (Figure S1 in Multimedia Appendix 1), 1 interviewee stated, "I just can't get a hold of it in any way, and I lost interest, like, in a second" (C1), whereas another proposed, "for the [sic] normal people, the simpler, the better" (C2), despite acknowledging that a niche demographic with greater experience or interest in the data may "want to dig into the details" (C2). Participants repeatedly advocated for simplicity, especially "if you are starting [exercise] from scratch" (C2), as it was speculated that it would make skill learning easier. However, 1 R&D participant also recommended simplicity for more advanced athletes to assist with the processing of information, where the feedback might be viewed as useful rather than a distraction, or something that could be perceived as intimidating.
The usability has to be, even if I'm a professional (...)
Convenience
The convenience of the applied technology and the delivery of feedback were highlighted as essential. Generally, the experts expressed a preference for a more streamlined process without unnecessary complexities or time delays between the data collection and the delivery of movement feedback. One R&D participant spoke of their own personal experiences to provide context around the benefits of convenient movement-quality feedback.
I don't want to go on the classes. I'm just too lazy to join the one-month course or something like that. If somebody could offer me a device for every now and then, put it on and then giving me feedback of my swimming technique. How to make it more effortless. That would be interesting. [RD2]
To accompany the desire for simplicity, the wearable technology experts widely alluded to consumer appetite for a convenient solution, promoting the use of wearable technology for data collection, as it "collects everything automatically, so you don't need to collect it by yourself" (RD1) and it is "easy because it's always with you" (RD5). However, when discussing the use of technology for the provision of feedback, opinions were mixed. Smartphones were broadly viewed as the preferred means of receiving feedback, "because everybody has one, and it's always there" (RD2), and "computers are too big" (RD4). Smartphones are more prevalent in modern society and have a wide range of capabilities, whereas other, larger devices are too impractical. Yet, some participants highlighted the negative aspects of smartphones for feedback. For example, a frequent gym-goer stated, "During my exercise, I don't want to gaze [at] it all the time (...) so mobile phone, no thank you."
(RD7)
Limitations and opportunities were simultaneously identified for other technologies too, both for collecting data and for providing feedback. Wrist units were perceived as convenient for most daily activities and many sport and exercise applications, especially as they can be used to collect data and subsequently deliver feedback. But the wrist was also viewed as "a very competitive spot" (RD2) that may already be occupied by other devices or accessories. Additionally, participants expressed concern that a single wearable might be "guessing" (RD1) when used on its own for data collection, and may also be inappropriate for some sports, as it may make it "harder to move, or it might even ruin your technique" (RD3). Similarly, a superficial device could be a problem "like, if you have contact sports" (RD8), as it could increase injury risk.
There appeared to be a trend whereby convenience could be sacrificed to an extent, albeit reluctantly, if greater benefits could be gleaned from, for example, a more comprehensive network of wearable devices. For instance, fewer sensors intuitively take less time to put on, but users may be required to wear more sensors to enhance the quality of the data and feedback. This may be easier to facilitate if user demands can be accommodated, such as if the sensors "allow some tolerance of where they are" (RD8) and "they don't really affect your performance in a bad way" (RD3). Furthermore, by accommodating accessible and favorable sensor locations, ensuring "that it's easy to wear" (RD7) and that devices do not "require quite a lot of time to strap on" (RD8), and even by making devices multifunctional, the experts appeared to suggest that much of the convenience could be retained.
Emotional Reactions
This subtheme primarily centered around emotional reactions to feedback, rather than the data collection process. Participant C3 highlighted the skepticism some users may have when receiving feedback, possibly from a lack of familiarity with technology, but also due to a lack of trust.
For some people, I'm quite sure that it would be kind of hard to believe that, "Is this really working? Or is this good for me? And how do they know that I should do this?" and convincing them. [C3]
The experts recognized that critiquing an individual's movement quality may be a sensitive topic. One participant expressed that users often reside in what was described as an "ideal self-bubble" (RD4), where individual beliefs differ greatly from reality. The participant indicated that individuals with a preconceived notion that they already understand good movement would react negatively to criticism. The interviewees indicated that product users would rather "trust the feeling [they] have in [their] body" (C1), and if the feedback was critical, they may perceive the feedback as erroneous, suggesting that they "wouldn't have [it] again" (C1). It was also proposed that, when receiving feedback, some users could react badly to seeing themselves performing a movement.
Some people might find it, I don't know, even embarrassing. Not everyone likes to see themselves, at least if they are not good at what they are doing.
[RD1]
Participants highlighted that a large portion of the population is technology-averse and that many people are reluctant to engage with modern devices, particularly for health-related applications. It was felt that some people would find it "kind of hard to believe" (C3) what they are being told by a device and that they may also "be nervous about where the information is used" (RD2). However, if delivered effectively and positively, "feedback is rewarding" and could encourage users to persist with the activity. It was believed that feedback "has to be really constructive" (C3) and supportive of progression, such that users remain engaged in improving their movement quality but are not deterred from using the technology for its other capabilities. Participants advised that feedback needs to inform consumers of any movement discrepancies, "but in order to give a positive impression, it would also need to have something encouraging" (C2). One individual cautioned about the effects of negatively perceived feedback.
Accommodating Needs
When discussing feedback, those interviewed provided conflicting views, particularly with regard to detail. Some experts suggested that beginners with less experience and understanding of good movement "might find it really useful to get some sort of explanation" (RD3) to help them understand how to use the feedback, whereas others indicated that too much detail could lead to confusion.
Those who we are trying to get physically active more, or they are just beginning, not too much information, because, well, I think they could get lost in there and it's kind of hard to understand it. [C3]
However, it is plausible that additional information may have been falsely associated with complexity, and that experienced individuals might actually "want to dig into the details" (C2). Notably, the interviewed experts collectively recognized that individuals would have different requirements and desires, proposing that giving the user some control over the feedback could be effective, enabling them to "check the key points" (RD3) during or immediately after exercise, while providing the opportunity to "read the details if [they] have time" (RD3). Some participants proposed that "background information about the person" (C3) could be integrated to offer users "the opportunity to give feedback to the system" (RD4), an approach that could enhance the insights provided by the wearable-based system. This may also enable increased relevance and accuracy, consequently tailoring the system to better accommodate individual requirements.
Device Capabilities and Suitability
Despite user demands, it is important to acknowledge the capability and accessibility limitations of existing technologies. As wearable technology is still a developing area, participants recognized the opportunities for improvement, such as devices being "more precise than they are nowadays" (C3), but also highlighted some existing restrictions that could limit how effectively wearable technology can be used for collecting movement data.
I see a lot of, especially in the hardware technology side, a lot of challenges that we still need to overcome. And when I talk about these, I mean, things like the sensor and sizing, the price of it, the communication of the sensor, or the sensors between each other. [C3]
Additionally, participants spoke of device suitability and effectiveness in certain environments and contexts, acknowledging that "it depends on the exercise type" (C3). Similarly, different feedback modalities were suggested to be better in some settings than others. For instance, "audio would be a bit difficult in the gym (...) but audio would be nice on a run" (C3). Similarly, visual feedback would be inappropriate for running, yet more suitable in a gym or at home.
Visualizations
Visual feedback, which may consist of images, videos, or animations, was thought to be the most effective way "to deliver a lot of information" (C2) with the greatest efficiency and ease of understanding, "instead of having to read stuff" (C2). Interviewees generally endorsed the use of a digital representation of the user, such as an avatar, as "you immediately get what the information is about" (C2), similar to that used in Figure S1 in Multimedia Appendix 1. Instead of something abstract that is "not you (...) it's something totally different" (C2), the experts largely favored more direct, relatable feedback requiring very little analysis.
Participants offered ways in which feedback could be optimized, perhaps through the inclusion of accompanying visual standards to aim for, such as seeing "the perfect image there, in a shadow" (RD3). An R&D participant proposed that an image of the "perfect" movement could be presented along with the user's movement, "maybe side-by-side, something like that" (RD6). However, some individuals emphasized the importance of appropriate visualization design, evident through their own misinterpretations of the examples presented. One participant stated, "I don't think it's very clear, at least for me" (C3), highlighting the need for clarity in the visualizations.
Something Extra
The wearable technology experts appeared to favor visualizations. However, the consensus was that visualizations would be best accompanied by an additional form of feedback to aid interpretation, as "visual on its own is maybe not enough" (C4). It was thought that visualizations alone would be able to show good or poor movements, but it was advocated that "it would need text and/or audio, also" (C4) to provide some sort of explanation. The interviewees generally felt that visualizations can depict the movements well, but consumers would likely be left without instruction to rectify or enhance any movement discrepancies in the absence of an additional feedback modality. It was also noted that, in some settings, visualizations may not be the most appropriate form of feedback, and therefore it could be beneficial to have alternatives.
Principal Findings
This study sought the opinions of experts with combined experience in the development and application of wearable technology, with emphasis on its application for measuring movement quality and delivering feedback. Overall, three themes were generated: (1) "Grab and Go," (2) "Adjust and Adapt," and (3) "Visualize and Feedback." Despite some ambivalence surrounding device preferences, there was collective agreement among participants on the importance and effectiveness of wearable devices in providing real-time, detailed movement-quality feedback. This agreement emphasized the potential of wearables to improve user awareness and enhance physical activity.
This study highlighted numerous barriers, as well as facilitators, to using wearables and consumer technologies to assess and improve movement quality. Initial accessibility, coupled with the retention of user interest, was recognized as essential to encourage users to improve their movement quality. Congruent with prior research [31], the interviewees emphasized the need to capture user interest through effective wearable device design and capability, and then preserve interest through ease of application, variety, engaging feedback, and providing scope for progression. Notably, however, this study is framed around the assessment and improvement of movement quality, inherently dependent on process-focused "Knowledge of Performance" feedback, in contrast to the outcome-based "Knowledge of Results" feedback frequently used when evaluating movement quantity [15,50]. Further, motor-skill refinement is arguably a more iterative process than increasing physical activity volume, given the requirement for more nuanced and continual refinements and repeated exposure [51]. Consequently, this study offers a novel perspective on feedback in assessing movement quality, focusing on the opinions of wearable technology experts rather than user perspectives.
To bolster efficacy, there is potential to apply methods such as machine learning, using captured movement data and user feedback to enable algorithmic updates and facilitate the ongoing learning and improvement of the user's movement quality [22]. This aligns with the appetite for personalization during movement-quality assessments identified in this study. Personalization would enable the user to have control over feedback modalities, feedback timing, and the level of detail they wish to receive, while potentially contributing to increased assessment accuracy. Additionally, congruent with previous research [29,31,52,53], this study highlights the potential negative impact of information overload on user interest, underscoring the advantages of individualization: users have control over how much feedback is provided and when. Indeed, the wearable experts advocated for streamlined feedback, delivered in manageable and interpretable quantities, while minimizing the overall information provided and catering to the individual's requirements. This concurs with Orphanides and Nam [54], who highlighted the need for flexible feedback methods, given the difficulties that specific populations, such as young children and older adults, face when interacting with modern technologies.
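One plausible way to realize the algorithmic updates described above is incremental (online) learning, where each user-confirmed label refines the movement-quality model over time. The following is a minimal sketch using scikit-learn's partial_fit interface; the feature layout, labels, and data stream are hypothetical, not a description of any existing product:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Log-loss SGD yields a logistic-regression-like classifier that
# supports incremental updates via partial_fit.
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = good movement, 1 = movement discrepancy

rng = np.random.default_rng(1)

def next_labeled_rep():
    """Stand-in for one repetition's features plus the user's feedback label."""
    label = int(rng.integers(0, 2))
    features = rng.normal(loc=label, scale=0.8, size=(1, 6))
    return features, label

# Each time the user confirms or corrects the feedback, update the model.
first_update = True
for _ in range(500):
    x, y = next_labeled_rep()
    if first_update:
        model.partial_fit(x, [y], classes=classes)  # classes required on first call
        first_update = False
    else:
        model.partial_fit(x, [y])

x_new, _ = next_labeled_rep()
print("predicted class:", model.predict(x_new)[0])
```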
The risk of overwhelming users with excessive detail may be negated through the use of glanceable displays when providing real-time feedback during certain activities [27], as indicated by the findings of this study. Specifically, research has suggested that delivering varied feedback in short snippets via glanceable displays may increase engagement and motivation to be more physically active [27,55,56]. However, it is postulated that glanceable displays may be too limiting to provide adequate nuance for substantial motor-skill development, but could be appropriate for minor technical adjustments and reinforcement in real time. The importance of ensuring that feedback is both constructive and positive to maximize user experience was emphasized by participants in this study, given the risk of negative experiences leading to device discontinuation [57]. Offering constructive feedback can enhance the potential of wearable technology to deliver effective solutions for those seeking to improve their movement quality [58]. Furthermore, in alignment with previous research [26], a different perspective on the use of physical activity trackers suggests that once the feedback is understood and, in the context of movement, mastery is attained, a feature may no longer be used. This alternative perspective may alleviate the pressure for sustained use of a feature, while potentially prompting the need for further progression in developing advanced motor skills. However, considering the potential for skill regression, it is crucial that features can be seamlessly reintroduced as required, helping users both regain and maintain movement skill proficiency.
While each device has inherent strengths and weaknesses, it may be advantageous to provide flexibility in device selection, given the diverse opinions observed in this study. Customization is already prevalent in physical activity tracking, where users can typically modify performance outcomes (eg, step-count targets and total daily energy expenditure), as well as feedback methods [59,60]. However, using wearable technology to measure and assess movement quality is still in its infancy, and there are additional challenges to personalization and the provision of user-specific feedback, largely attributable to individuals' unique anthropometrics and physical limitations (eg, injuries and mobility restrictions). Of concern, as outputs become increasingly segmented and specific, measurement accuracy has been shown to proportionately decline [16,22]. In accordance with previous research, this study highlighted that consumers may have difficulty trusting technology [31,52], and unreliable, invalid data would likely exacerbate this issue. However, Meyer et al [52] proposed that there is a trade-off between detailed, yet erroneous, outputs and losing user-specificity due to oversimplification. Providing relevant and specific feedback to the user can be attained without the need for excessive detail [52], and, as suggested by the findings of this study, avoiding information overload could enhance motor-skill learning by streamlining the necessary information into manageable and interpretable quantities.
The unavoidable compromise between sensor quantity and the capability to effectively measure and assess holistic movements is a limitation of wearable technology [3]. It is widely recognized, and was expressed by participants within this study, that a network of sensors positioned around the body is generally superior for conducting movement-quality assessments of holistic movements, providing greater accuracy and more insightful movement capture [3,16,22-24]. Indeed, within clinical environments especially, there is very little leniency in measurement accuracy [61]. Hence, in such settings, there may be value in the application of a comprehensive sensor network. However, the possible need for multiple sensors may reduce practicality, a potential barrier for some. The participants of this study conveyed that users may be more tolerant of reduced practicality if the feedback was perceived as valuable and informative. Moreover, reiterating previous observations [26], participants indicated that once users achieve mastery of a movement, they might cease using certain features. It is therefore plausible to suggest that the short-term inconvenience of managing multiple sensors could also be justified by the long-term benefits in motor-skill development. Subsequently, this raises questions around the optimal balance between practical application and sufficient accuracy for general-population consumers, considerations which align with recent pedagogical research on the assessment of motor development in youths [62]. Further, the cost implications of a multisensor approach should not be overlooked. Although users may tolerate temporary discomfort for improved movement-quality assessments using wearables, the perceived value and affordability of using multiple sensors, particularly over a relatively short period, could impact their willingness to adopt such technologies [13].
Participants discussed numerous feedback methods (ie, text, audio, visual, and haptic), providing contextual examples of when each may best be implemented for assessing movement quality. However, a visual representation of the user was widely endorsed by the experts in lieu of something more abstract or minimalistic, particularly when learning a new movement. In the context of activity quantification, overly minimalistic visualizations have been shown to be less favorable [52], and this trend appears to extend into movement-quality feedback. While abstract visualizations may be effective for learning both simple and complex motor skills, they may be perceived as boring after prolonged use, and challenging to apply to complex multidimensional movements in 3D space [63]. Participants also encouraged the use of multimodal feedback, such that a combination of methods could compensate for the shortcomings of unimodal feedback. Specifically, it was proposed that supplementing a visual representation of the user with clear and concise text or instructional audio would aid understanding. Indeed, previous research strongly supports this, indicating that audiovisual feedback enhances motor learning for a single task more effectively than single-modality feedback [50,63]. Interestingly, however, participants in this study suggested that some users may dislike seeing themselves on a screen, especially if their movement is being critiqued. This sentiment presents a notable paradox; those who would benefit most from corrective feedback may experience discomfort through self-observation. Therefore, it is surmised, based on the present results, that an avatar may be a suitable compromise that is both engaging and understandable, yet offers sensitivity, and could help avoid potential psychological discomfort. Specifically, the use of avatars has been shown to be effective in supporting the learning of gross motor skills [64,65], and in increasing physical activity levels [65].

Strengths and Limitations There are numerous strengths associated with this study, including the recruitment of participants with extensive experience in wearable technology use in sports, exercise, and wellness. The sample encompassed both technical expertise and insights from those who interact with users. Furthermore, the study used a thorough and rigorous data analysis process centered around RTA [42-45], a well-established and widely recognized qualitative research method. Nonetheless, the study is not without limitations. The sample size was deemed adequate using the concept of information power rather than data saturation, which, according to Braun and Clarke [45], is too subjective to yield a definitive point of saturation when conducting RTA [40]. However, the interviews were conducted with experts from a limited recruitment pool, which may have narrowed the findings due to common experiences. As such, future research incorporating a more diverse, wider-reaching array of wearable technology experts is required, while it would also be beneficial to seek insights from prospective wearable users for evaluating movement quality specifically. Moreover, it is pertinent to note that this study was conducted with a view to targeting a nonelite demographic. Therefore, future research is warranted to consider feedback mechanisms for elite athletes.
Conclusions Overall, this study identified that wearable technology experts perceived convenience and simplicity as priorities for both movement data capture and feedback mechanisms. Further, due to the subjective demands of prospective users, an adaptable solution was considered preferable when implementing these findings in a practical setting. Moreover, it was advised that movement-quality interventions utilizing technology should be progressive and use visual feedback that is representative of the user, such as an avatar, supplemented with concise text or verbal instructions as part of a multimodal system. A second study will consider the opinions of prospective consumers to compare with the findings of this study and enable a comprehensive evaluation across all stakeholders. Thereafter, the combined findings from both wearable technology experts and users should be applied in a practical setting to assess their efficacy for enhancing movement quality.

Figure 1. Final thematic map illustrating the interrelatedness of 3 generated themes: "Grab and Go," "Adjust and Adapt," and "Visualize and Feedback." Within themes, the relationships of the identified subthemes are also presented, including directionality.

Illustrative participant quote: "(...) in that moment I'm exercising and doing sports, I do not have brain cells to start analyzing what I'm doing. If I have the information when I'm on the bike, or during the physical movements. So, with one glance, I have to analyze it easily." [RD7]
8,605.8
2024-01-26T00:00:00.000
[ "Engineering", "Computer Science" ]
A New Compton Camera Imaging Model to Mitigate the Finite Spatial Resolution of Detectors and New Camera Designs for Implementation An intrinsic limitation of the accuracy that can be achieved with Compton cameras results from the inevitable fact that the detectors, which comprise the camera, cannot have infinitely-accurate spatial resolution. To mitigate this loss of accuracy, a new imaging model is proposed. The implementation of the new imaging model, however, requires new camera designs. The results of a computer simulation indicate that the new imaging model can produce reasonable images, at least when noiseless simulated data are used. In the future, more work is needed to determine if the use of the new imaging model will improve the imaging capabilities of Compton cameras despite the loss of sensitivity caused by the use of the new camera designs. Regardless of the outcome of this work, the results presented here illustrate that new models for imaging from Compton scatters are possible and motivate the development of further models that could be more advantageous than the ones already developed.

Introduction Compton cameras have the potential to significantly improve diagnostic and therapeutic medicine. For instance, bismuth-213 radioimmunotherapy is actively being pursued at the present time for treating various forms of cancer, as well as HIV [31,32]. Maximizing the efficacy of this treatment requires maximizing the amount of dose administered to the patient. However, the potential hazard to bone marrow, kidney, lungs and other secondary organs limits the dose that can be administered. The amount of radioactivity that can be tolerated by these secondary organs varies from patient to patient. Hence, accurately calculating the absorbed dose within these organs is essential. Producing the most accurate images possible of the bismuth in the patient will result in the most accurate calculations of absorbed dose. Of all the medical imaging techniques presently available, single-photon emission computed tomography (SPECT) will produce the best possible images of bismuth-213. However, the collimator that is used in a SPECT camera limits the energy of the photons that can be imaged. As a consequence, in comparison to images of technetium-99m (140 keV), a SPECT camera will produce relatively poor-quality images of bismuth-213 because of the high energy of the photons (440 keV) that it emits. Fortunately, since collimators are not used, Compton cameras have the potential of out-performing a SPECT camera when bismuth-213 is imaged.
An intrinsic limitation of the accuracy that can be achieved with Compton cameras results from the inevitable fact that the detectors, which comprise the camera, cannot have infinitely-accurate spatial resolution. The effects of the finite spatial resolution of detectors, the finite energy resolution of detectors and a phenomenon known as Doppler broadening on the image accuracy of Compton cameras have been studied [33]. It was found that of these three factors, finite spatial resolution is the dominant degrading factor at a higher medical diagnostic energy level (511 keV). The new theory presented in this paper may mitigate the loss of accuracy due to the finite spatial resolution of the detectors. In addition, this theory, if fully developed (generalized), has the potential to decrease the amount of data that need to be processed and provides more flexibility in designing the detectors that comprise the camera. The new theory is based on a new imaging model. In Section 3 of this paper, this new imaging model is developed along with a method for reconstructing an integral of a distribution of radioactivity along a line using the new imaging model. New camera designs that can be used to exploit the new imaging model are presented in Section 4 of this paper. First, a camera design that can be used to produce a parallel projection of the distribution is presented. Since this parallel projection is similar to the data obtained by a conventional SPECT camera equipped with a parallel-hole collimator at one positioning of the camera, three-dimensional reconstructions could be obtained by producing such projections as the Compton camera is rotated about the patient. A second design is presented that allows fan-beam projections of the distribution to be reconstructed. Producing fan-beam projections as the camera is rotated about the distribution would, of course, allow two-dimensional reconstructions to be made. The results of computer simulations to demonstrate the imaging model and inversion method developed in Section 3 are presented in Section 6. Finally, in Section 7, the advantages and disadvantages of the new imaging model and the new camera designs are discussed.

Existing Compton Camera Imaging Models As originally proposed, a Compton camera consists of two parallel planar detectors [34]. Ideally, the physics within the camera is as follows. As illustrated in Figure 1, a photon, which originated within the distribution, interacts, via a Compton scatter, with the nearest of the two detectors. As a result of the scatter, the photon loses some of its energy, and the direction of its trajectory changes. Then, the photon continues on in the new direction and interacts with the second detector. Assuming this camera physics, the following is done to construct data so that the distribution of radioactivity can be estimated. The location of the two points of interaction of the photon and the amount of energy lost during the scatter are measured. The photons that interact at a given location on the first detector and a given location on the second detector and that have lost a given amount of energy are tallied. This provides the number of counts in a "measurement bin".
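To make the notion of a measurement bin concrete, the following sketch (not taken from the paper; the event arrays, bin counts and variable names are all assumptions chosen for illustration) tallies simulated coincidence events into bins keyed by the two interaction positions and the energy deposit:

import numpy as np

# Hypothetical coincidence events: for each detected photon we record the
# interaction position on the first detector (x1, y1), the interaction
# position on the second detector (x2, y2), and the energy lost in the
# Compton scatter (dE). Here they are random stand-ins.
rng = np.random.default_rng(0)
n_events = 10_000
events = np.column_stack([
    rng.uniform(-1, 1, n_events),   # x1 on first detector
    rng.uniform(-1, 1, n_events),   # y1 on first detector
    rng.uniform(-2, 2, n_events),   # x2 on second detector
    rng.uniform(-2, 2, n_events),   # y2 on second detector
    rng.uniform(0, 300, n_events),  # energy deposit dE in keV
])

# One cell of this 5-D histogram is one "measurement bin": photons that hit
# a given first-detector location, a given second-detector location, and
# lost a given amount of energy are tallied together.
bins = [8, 8, 8, 8, 16]  # assumed spatial and energy binning
counts, edges = np.histogramdd(events, bins=bins)
print(counts.shape, counts.sum())  # (8, 8, 8, 8, 16) 10000.0

Each cell of the resulting histogram corresponds to one measurement bin of the kind described above.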
Two imaging models for Compton camera data have been proposed by other researchers. Both models assume that the number of counts in a measurement bin is proportional to an integral of the distribution over a cone. The apex of the cone is at the first point of interaction; the axis of symmetry of the cone is the line that connects the two points of interaction; and the half angle of the cone is the scatter angle calculated from the energy measurement using Compton's law [35]. These imaging models were proposed in an effort to develop reconstruction methods that would produce a three-dimensional estimate of the distribution of radioactivity from a set of such integrals.

The two models use different mathematical equations to express the integral over the cone. One model considered the data to be a surface integral of the distribution over a cone [36]. The reader is reminded that not all integrals over a surface are surface integrals. For Φ ∈ R³, β ∈ S² and 0 < ψ < π, let the symbol S_SI(Φ, β, ψ) denote the surface integral (SI) model of the distribution of radioactivity on the one-sheet cone whose apex is the point Φ, whose axis of symmetry is the unit vector β, and whose half angle is ψ. Furthermore, let f(x) denote the distribution of radioactivity at the point x for x ∈ R³. Let the vectors Φ, β and x be described in terms of a global coordinate system. A local coordinate system in which the "z" axis points in the direction of the vector β is used in expressing the surface integral. Using this coordinate system, the unit vector α = α(φ, ψ) is expressed in a spherical coordinate system by letting

α(φ, ψ) = (cos φ sin ψ, sin φ sin ψ, cos ψ),

where ψ is the angle measured from the "z" axis. A standard calculus equation for the surface integral of a cone yields

S_SI(Φ, β, ψ) = ∫₀^{2π} ∫₀^{∞} f(Φ + r M α(φ, ψ)) r sin ψ dr dφ,

where the rotation matrix M is defined as

M = [β⊥1  β⊥2  β],

where β, β⊥1 and β⊥2 are three orthonormal column vectors in R³.

A second model of the data has been considered [37]. This model is called the integral of cone-beam line-integrals (ILI) model here. Let S_ILI(Φ, β, ψ) denote the integral of cone-beam line-integrals of the distribution of radioactivity. The cone-beam line-integral of the distribution at Φ is defined as

g(Φ, ν) = ∫₀^{∞} f(Φ + tν) dt,

where |ν| = 1. Using M and α(φ, ψ) as previously defined, the integral of cone-beam line-integrals of the distribution is defined as

S_ILI(Φ, β, ψ) = ∫₀^{2π} g(Φ, M α(φ, ψ)) dφ.

In a previous publication, conditions that describe the data needed to reconstruct a single line-integral from Compton camera data were developed [38]. It was found there that these "completeness conditions" depended on which of the two models was assumed for the data. In particular, it was found that the completeness condition for the SI model is more demanding than the ILI condition, because the SI condition needed data from more than one apex, whereas the ILI condition needed data from just one apex. Like the ILI model, the new imaging model that will be proposed in the next section will have the same desirable property.

The Development of a New Imaging Model and Its Inversion To develop the new imaging model, a new function is defined. For β ∈ S² and ℓ ∈ R¹, the function F(β, ℓ) is defined as [39]

F(β, ℓ) = (1/π) p.v. ∫ (∂f̌(β, t)/∂t) / (ℓ − t) dt,

where f̌(β, ℓ) is the three-dimensional Radon transform; namely,

f̌(β, ℓ) = ∫_{R³} f(x) δ(β·x − ℓ) dx.

In words, F(β, ℓ) is the Hilbert transform of the partial derivative of the three-dimensional Radon transform.
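Because the cone half-angle above is obtained from the measured energy loss through Compton's law, a small helper can make the conversion explicit. The sketch below uses only standard Compton-scattering kinematics; the function and variable names are illustrative rather than taken from the paper:

import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def cone_half_angle(e_initial_kev: float, e_lost_kev: float) -> float:
    """Return the scatter (cone half) angle in radians from Compton's law.

    cos(psi) = 1 - m_e c^2 * (1/E' - 1/E), where E is the photon energy
    before the scatter and E' = E - dE is the energy after it.
    """
    e_after = e_initial_kev - e_lost_kev
    if e_after <= 0:
        raise ValueError("energy loss exceeds the incident photon energy")
    cos_psi = 1.0 - M_E_C2_KEV * (1.0 / e_after - 1.0 / e_initial_kev)
    if not -1.0 <= cos_psi <= 1.0:
        raise ValueError("energy loss inconsistent with a Compton scatter")
    return math.acos(cos_psi)

# Example: a 440 keV photon (bismuth-213) losing 150 keV in the first detector.
print(math.degrees(cone_half_angle(440.0, 150.0)))

For a 440 keV bismuth-213 photon, for example, a 150 keV energy deposit corresponds to a cone half-angle of roughly 66 degrees.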
The proof of Lemma 1 is given in [40]. Substituting Equation (13) into Equation (11) yields Equation (14). Exchanging integrals then yields Equation (15). The new imaging model is defined as

S_IILI(Φ, ϕ, ψ) = ∫₀^{π} S_ILI(Φ, β(θ), ψ) dθ, (16)

where the axes β(θ) are unit vectors perpendicular to the vector ϕ⊥ that rotate through 180 degrees around it. This model is referred to as the integral of the integral of cone-beam line-integrals (IILI) model. Substituting Equation (16) into Equation (15) yields Equation (17), which makes it possible to invert the IILI model to obtain the integral of the distribution along the line parallel to the vector ϕ⊥ that intersects Φ.

Developing a geometric interpretation of the last two equations is useful. The geometric interpretation of Equation (17) is relatively straightforward. The integral of the distribution along the line parallel to the vector ϕ⊥ that intersects the apex Φ can be calculated by integrating the appropriately-weighted IILI model over all possible half-angles. To interpret the new imaging model defined in Equation (16), it is helpful to determine the set of cones that are involved with the equation's integral. Equation (16) involves the integral of conical integrals, where the ILI model is used for the conical integrals. Since Φ and ψ are fixed in Equation (16), the cones that are being integrated over share a common apex and half-angle. Moreover, all of these cones have an axis of symmetry that is perpendicular to the vector ϕ⊥. Since the integral in Equation (16) has a lower limit of zero and an upper limit of π, to implement this integral, the conical integrals for the previously-described set of cones for a collection of axes that span 180 degrees are needed. In other words, then, Equation (16) involves integrating the conical integrals over the set of cones with a fixed half-angle that share a common apex Φ and have axes that are perpendicular to the vector ϕ⊥ and span 180 degrees around the vector. Camera designs that make it possible to obtain this integral of conical integrals are discussed in the next section.

Reconstruction Using the New Imaging Model The new imaging model being proposed here can be implemented using the camera design illustrated in Figure 2.
In this design, the first detector consists of a single detector element and is surrounded by a coplanar semicircle-shaped second detector. In what follows, this design will be called the single first semicircle second (SFSS) design. Recall from Section 3 that, to reconstruct the line through the distribution using the new imaging model, the conical integrals over a set of cones that have a fixed half-angle, share a common apex lying on the line, have axes perpendicular to the line and span 180 degrees around the line need to be integrated. This can be achieved with the SFSS design, because the first detector element can be "seen" from the second detector in a "semicircle of directions". Because the second detector consists of just one element, the physics within the camera would in effect implement the integral in Equation (16). In this case, a measurement bin is defined as the number of photons that interact with the single first detector element, lose a certain amount of energy and then interact anywhere on the semicircle-shaped second detector. If these measurement bins are obtained for all possible energies, then the function S_IILI(Φ, ϕ, ψ) is known for all possible half-angles. Hence, Equation (17) can then be used to reconstruct the integral of the distribution along the line that passes through the first detector element and is perpendicular to the plane that contains both detectors. To avoid a possible misconception regarding the reconstruction just described, note that the photons interacting with the first detector do not have to emanate from the direction perpendicular to the plane that contains both detectors; rather, the photons can emanate from any direction. From these photons, the integral of the distribution along the line that is perpendicular to the plane that contains both detectors can be reconstructed. Once the integral along this line has been produced, a three-dimensional reconstruction of the distribution can be performed by using conventional tomographic techniques.

Single First Detector Ring Second Detector Design Sensitivity is defined to be the fraction of the photons that emanate from a radioactive source that are actually detected and used in the image formation process. To produce quality images, it is desirable to develop imaging systems with high sensitivity. The sensitivity of the SFSS design can be approximately doubled. By noting the symmetry involved, Equation (16) can be rewritten as an equivalent integral over axes that span a full 360 degrees around the line. This rewriting allows the second detector of the SFSS design to be extended from a semicircle to a full circle, as illustrated in Figure 3, thus approximately doubling the sensitivity of the design. It could be said that the first detector is "seen" from the second detector in a "circle of directions". This design is referred to as the single first ring second (SFRS) design.

The SFRS camera design can be extended to facilitate the reconstruction of parallel projections. Forming a single larger camera by arranging multiple SFRS cameras in a two-dimensional array would allow a parallel projection of the distribution to be made with just one positioning of the camera. This design, which is illustrated in Figure 4, is referred to here as the multiple single first ring second (MSFRS) design. To reduce false coincidences, the second detectors could be shielded from the photons emanating from the patient. As is well understood, producing parallel projections as the camera is rotated about the distribution would allow a three-dimensional reconstruction to be made.
Beach Ball Design: A Camera for Fan-Beam Reconstruction A fan-beam projection of the distribution of radioactivity can be produced by arranging multiple semicircle-shaped second detectors so that they lie on a hemisphere and are centered on a single first detector element. Three such second detectors are illustrated in Figure 5. Using Equation (17) on the data obtained from each second detector allows the integral of the distribution along the line that is perpendicular to the detector and intersects the first detector element to be reconstructed. As seen in Figure 5, the arrangement of the second detector elements results in a set of line-integrals being reconstructed that are all perpendicular to the line that contains the end points of the second detector elements and intersects the first detector element. This set of line-integrals is a fan-beam projection of the distribution. Since the resulting camera resembles half of a beach ball, this camera design is referred to as the beach ball design.

Figure 5. Illustration of the beach ball camera design. In this figure, three semicircle-shaped second detectors are arranged so that they lie on a hemisphere that is centered on a single first detector element. Using Equation (17) on the data obtained from each second detector allows the integral of the distribution along the line that intersects the first detector element and is perpendicular to the plane that contains the second detector to be reconstructed. Three such lines are illustrated in the figure.

The beach ball design can be extended to facilitate a three-dimensional reconstruction of a distribution of radioactivity. If multiple beach balls are arranged as shown in Figure 6 and are rotated about the distribution of radioactivity, then a fan-beam reconstruction can be performed on multiple planes that are perpendicular to the axis of rotation of the motion. This is called the "multiple beach ball" camera design. This design would result in a data collection geometry that is similar to that of the multi-slice fan-beam collimator used in conventional SPECT [41,42]. This design would allow a three-dimensional reconstruction of the distribution of radioactivity to be made.

Figure 6. If multiple beach balls are arranged as shown in the figure and rotated about the distribution of radioactivity, then a fan-beam reconstruction can be performed on multiple planes that are perpendicular to the axis of the rotation, thus making a three-dimensional reconstruction possible. In practice, the axis of rotation of the camera would typically be the axis of symmetry of the patient.

Methodology of Computer Simulations Computer simulations were performed to further verify the imaging model and inversion method presented in Section 3.
Discrete approximations to the equations presented in Section 3 were made, and computer code was written to implement them. A mathematical phantom, which consists of five constant-grayscale ellipsoids, was used in simulating the data. The phantom is similar to a head phantom previously proposed [43]. The ellipsoids are listed in Table 1. For example, the lengths of the semi-principal axes of the ellipsoid that represents the skull's outer boundary are 0.75, 0.75 and 1.00 units. To simulate the reconstruction of line-integrals from Compton camera data, the MSFRS camera design was used. The MSFRS camera consisted of 10^4 coplanar SFSS cameras arranged in a 100 × 100 rectangular grid and was located 2.2 units from the center of the phantom. For each first and second detector element pair, 46 cone integrals of the distribution, which were equally spaced between 0 and 180 degrees, were calculated. The SFSS cameras were spaced 0.03 units apart. As a consequence, the whole parallel projection of the phantom was reconstructed (i.e., the projection was not truncated). Thus, a total of 10^4 line-integrals was calculated.

Table 1. The five ellipsoids that comprise the mathematical phantom used in this paper (columns: Ellipsoid, Origin, Axes, Rotation, Gray Level).

Results of Computer Simulations The results of computer simulations to demonstrate the imaging model and inversion method previously discussed in Section 3 are presented here. A parallel projection of the phantom was reconstructed using the IILI imaging model for the data. To help illustrate the overall accuracy of the projection, the reconstruction is displayed as an image alongside the image of the "true" projection. Furthermore, to illustrate the accuracy of the reconstructed values more quantitatively, the reconstructed line-integral values and the true line-integral values along a one-dimensional slice of the projection through its center are presented in a graph.

The two images in Figure 7 provide a comparison of the "true" parallel projection of the phantom with the reconstructed one. In neither image can all five ellipsoids be seen, because the grayscale values of these images are line-integrals of the phantom, not voxel values. The image on the left is the true parallel projection of the phantom. This projection was obtained by numerically integrating the phantom in the direction collinear to the y-axis. The image on the right is the reconstructed projection. This image is seen to be largely similar to the true image. In particular, no distortion of the boundaries that comprise the image can be seen in the reconstructed projection. In the true image, the ball located at the bottom of the phantom (Ellipsoid 5) is faint, but its lower half is observable. However, this ball cannot be seen in the reconstructed image. A close inspection of the images reveals some blurring of Ellipsoids 3 and 4. The grayscale windowing values for the true image are [0.85, 1.12], and the windowing values for the reconstructed image are [0.75, 1.07].

Figure 7. Comparison of the "true" parallel projection (left) with the reconstructed projection using the IILI model for the data (right). The reconstructed parallel projection is seen to be similar to the true parallel projection. To avoid misinterpreting these images, note that the grayscale values of these images are line-integrals of the phantom, not voxel values.
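As a rough illustration of how such a "true" parallel projection can be generated, the sketch below numerically integrates a single constant-grayscale ellipsoid along the y-axis over a 100 × 100 grid of lines spaced 0.03 units apart. It is a simplified stand-in for the five-ellipsoid phantom, and all names and values other than the grid geometry are assumptions:

import numpy as np

# One constant-grayscale ellipsoid (the paper's phantom uses five of these).
# Semi-axes and grayscale are illustrative, not the paper's Table 1 values.
center = np.array([0.0, 0.0, 0.0])
axes = np.array([0.75, 0.75, 1.00])
gray = 1.0

def phantom(x, y, z):
    """Indicator of the ellipsoid times its grayscale value."""
    u = ((x - center[0]) / axes[0]) ** 2 \
        + ((y - center[1]) / axes[1]) ** 2 \
        + ((z - center[2]) / axes[2]) ** 2
    return np.where(u <= 1.0, gray, 0.0)

# 100 x 100 grid of parallel lines collinear with the y-axis, mimicking the
# MSFRS arrangement of SFSS cameras spaced 0.03 units apart.
n = 100
coords = (np.arange(n) - n / 2 + 0.5) * 0.03
xs, zs = np.meshgrid(coords, coords, indexing="ij")

# Numerically integrate along y for each (x, z) line.
ys = np.linspace(-1.5, 1.5, 601)
dy = ys[1] - ys[0]
projection = phantom(xs[..., None], ys[None, None, :], zs[..., None]).sum(-1) * dy

print(projection.shape, projection.max())  # central line-integral is ~ 1.5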
In Figure 8, a graph of the "true" line-integral values (solid line) of the phantom and a graph of the reconstructed line-integral values (dashed line) are presented. In particular, the graphs are the values of the line-integrals along the vertical cross-section through the center of the parallel projection. Largely speaking, the two graphs are similar within the support of the phantom, although the graph of the reconstructed values appears to be a smoothed version of the graph of the true values. This smoothness is expected given the quadratures used to implement the inversion formulas. Moreover, it can be seen that the reconstructed values are typically lower than the true values, with a relative error of about five percent.

The MSFRS camera design has advantages and disadvantages. All would agree that the lack of sensitivity would be a major disadvantage of the MSFRS design. Increasing the number of first detectors per area on the face of the camera will improve the sensitivity of the camera. An increase in the density of first detectors on the face of the camera can be achieved in a number of ways. Decreasing the distance between the first and second detectors will make it possible to increase the density of the first detectors on the face of the camera. Stacking and/or overlapping the SFRS cameras that comprise the MSFRS camera can also increase the density. Alternatively, SFSS cameras can be stacked and/or overlapped to form a camera with more sensitivity. However, the amount of stacking that can be achieved in practice is limited by the need to keep the first detectors as close to the distribution of radioactivity as possible to prevent degradation of the camera's accuracy. On the other hand, the MSFRS camera design does have an advantage. A problem with the conventional camera design is the false coincidences caused by gamma rays being scattered from the second detector back into the first detector [44]. An advantage of the MSFRS design is that this source of false coincidences is substantially reduced, in comparison to the conventional camera design, because there are fewer second detector elements for each first detector element in the MSFRS design.

Beach Ball Camera Design The beach ball design has an advantage and a disadvantage relative to the SFRS design. Since the second detector surrounds the first, the beach ball design has a significant increase in sensitivity relative to the SFRS design. However, this will be at the cost of increased false coincidences caused by gamma rays being scattered from the second detector back into the first detector. Like the MSFRS camera, the sensitivity of the multiple beach ball design can be improved by increasing the number of first detector elements per area on the face of the camera. This can be achieved by reducing the distance between the first detector and the second detector.

A disadvantage that both the beach ball and the MSFRS designs have in common is that the camera has to be moved around the object to perform a three-dimensional reconstruction. Another method for performing a three-dimensional reconstruction from Compton camera data, which does not require the motion of the camera, has been previously proposed [36]. To perform this reconstruction, however, both detectors, which comprise the camera, have to be an "infinitely-extending plane".
Advantages and Disadvantages of the New Imaging Model and Camera Designs The advantages of the new imaging model and camera designs are discussed first in this section; a major disadvantage of such imaging is discussed at its end.

Mitigating the Effects of the Finite Spatial Resolution of Detectors It is known that the finite spatial resolution of detectors, the finite energy resolution of detectors and Doppler broadening degrade the performance of Compton cameras. A recent study [33] has found that of these three factors, the finite spatial resolution of the detectors is the dominant degrading factor at a higher medical diagnostic energy level (511 keV). As a consequence, mitigating the effects of the finite spatial resolution of detectors has the potential of significantly improving the performance of Compton cameras. Unfortunately, the camera designs proposed in this paper will not totally eliminate the image degradation due to the finite spatial resolution of the detectors. The finite width of ring detectors and first detector elements will degrade the performance of the cameras. Hence, the imaging model proposed in this paper cannot totally eliminate the degradation due to the finite spatial resolution of detectors. Nonetheless, it has the potential to significantly mitigate the degradation.

Reducing the Amount of Data Measured and Improving Its Quality With the MSFRS camera design, the multiple second detector elements have in effect been combined into one element. This reduction will reduce the amount of data that need to be measured. A number of iterative algorithms have been proposed for reconstruction from Compton camera data [45-49]. Reducing the amount of data that needs to be processed is especially important if an iterative algorithm is used for the reconstruction. Furthermore, the larger measurement bins will result in more photons being counted per bin, thus improving the quality of the measurements.

Increasing the Flexibility of Detector Designs Using the imaging model proposed here would provide more flexibility in designing the detectors that comprise a Compton camera. Both microstrip and pixelated detectors could be used to build a Compton camera. A microstrip detector is a detector where a family of narrow parallel strips of electrodes is fabricated on one side of the detector and a second family of strips, which runs orthogonally to the first, is fabricated on the opposite side [50]. The current generated in both families of strips when a photon interacts with the detector is used to determine the point of interaction of the photon. In contrast, when the IILI model is used, a second family of strips, along with the associated readout electronics, is not needed.

Although reconstructions can be made from beach ball cameras with only one family of strips, there may be an advantage to using beach ball cameras with the conventional two families of strips. Previously-developed completeness conditions imply that a cone-beam projection of the distribution of radioactivity would be obtained from a "double-striped beach ball camera" if the ILI model were assumed for the Compton camera data [38]. A cone-beam projection would provide more information than the fan-beam projection produced from the single-striped beach ball camera. However, it should be noted that the number of counts per measurement bin would be smaller with the double-striped camera than with the single-striped camera.
Alternatively, if pixelated detectors rather than detectors with strips are used to build a Compton camera, and if the imaging model proposed here were not used, then to obtain the best possible image quality, the detectors would be fabricated with pixels that are as small as possible. However, if the imaging model proposed here were used, then the detectors could be fabricated using pixels with the most advantageous size, rather than the smallest size possible. For example, a semicircle detector could be realized by tiling together a number of smaller flat detectors into an approximate semicircle shape. If it is desirable for each flat detector to consist of just one pixel, then the number and the size of the flat detectors could be selected in an advantageous fashion.

A Disadvantage: Loss of Sensitivity A camera design with relatively poor sensitivity, at best, would result in the need for an increased imaging time and, at worst, would make the design impractical. Although the designs proposed here are intended for imaging photons of higher energy than the 140 keV photons that are presently imaged with collimated Anger cameras, it is informative to compare the sensitivity of the proposed camera to that of the conventional collimated Anger camera. A major loss of sensitivity with the conventional camera equipped with a parallel collimator is that the collimator stops most of the emanating photons that are not approximately perpendicular to the face of the camera. In contrast, the designs proposed here can use any photon that interacts with the first detector, no matter from which direction the photon emanated. This is an advantage of the proposed designs over the conventional collimator design. There are at least three aspects of the designs proposed here, however, that limit sensitivity. First, once the photon interacts with the first detector, the photon must travel in just the right direction so that it interacts with the second detector. The beach ball design has an advantage over the MSFRS camera in this respect. Secondly, the area of the first detector must be small. The imaging model proposed here assumes its area is infinitesimally small. Thirdly, the number of first detectors per area on the face of the proposed camera limits the sensitivity of the proposed designs. If a photon does not interact with a first detector, the proposed cameras cannot make use of the photon. Unless these aspects are properly addressed, it seems likely that the camera designs proposed here will exhibit inferior sensitivity to that of the conventional collimated Anger camera.
Nonetheless, proposing new Compton camera designs that reduce sensitivity is not without precedent. Equipping Compton cameras with parallel plate collimators [51] and multiple pinhole collimators [52] has been proposed. More recently, equipping Compton cameras with an innovative collimator has been proposed [53]. The collimator has multiple pinholes in its center region and large-area slats in its outer region. Unfortunately, the addition of a collimator of any sort will reduce the sensitivity of the camera. The new imaging model proposed here has the potential of mitigating the degradation caused by the finite spatial resolution of detectors, but unfortunately, the camera designs proposed here that can be used to implement the model will significantly reduce the sensitivity of the camera. In the future, more work is needed to determine if the use of the new imaging model will improve the imaging capabilities of Compton cameras despite the loss of sensitivity caused by the use of the new camera designs. Regardless of the outcome of this work, the results presented here illustrate that new models for imaging from Compton scatters are possible and motivate the development of further models that could be more advantageous than the ones already developed.

Conclusions A new imaging model for Compton cameras has been proposed. The model has the potential of mitigating the loss of accuracy due to the finite spatial resolution of detectors, decreasing the amount of data that needs to be processed and simplifying the construction of detectors for Compton cameras. The results of a computer simulation indicate that the new imaging model can produce reasonable images, at least when noiseless simulated data are used. The implementation of the new imaging model, however, requires new camera designs. Unfortunately, the camera designs presented here will result in a significant loss in sensitivity in comparison to the conventional parallel planar Compton camera design. In the future, more work is needed to determine if the use of the new imaging model will improve the imaging capabilities of Compton cameras despite the loss of sensitivity caused by the use of the new camera designs. Regardless of the outcome of this work, the results presented here illustrate that new models for imaging from Compton scatters are possible and motivate the development of further models that could be more advantageous than the ones already developed.

Figure 1. Illustration of the ideal physics of Compton cameras.

Figure 2. The single first semicircle second (SFSS) camera design consists of a first detector element and a second detector shaped as a semicircle. This figure illustrates that the integral of the integral of cone-beam line-integrals (IILI) model can be implemented. The distribution of radioactivity would lie in front of the plane that contains the detectors.

Figure 3. This figure illustrates the single first ring second (SFRS) camera design. The first detector consists of a single first detector element, and the second detector is a coplanar ring centered on the first detector.

Figure 4. Illustration of the multiple single first ring second (MSFRS) camera design. Multiple SFRS cameras are arranged in a two-dimensional array, allowing a parallel projection of the distribution of radioactivity to be made.
Figure 8. Comparison of the reconstructed line-integral values (dashed line), assuming the IILI model for the data, with the "true" line-integral values (solid line). The reconstructed values are seen to be similar to the true values. The graphs are the values of the line-integrals along the vertical cross-section through the center of the parallel projections shown in Figure 7.
7,422.8
2015-10-27T00:00:00.000
[ "Physics", "Mathematics" ]
Dinucleotide biases in RNA viruses that infect vertebrates or invertebrates ABSTRACT CpG and UpA dinucleotides are under-represented in vertebrate genomes, whereas most invertebrates only show a bias against UpA. RNA viruses are thought to have evolved genomes that resemble the dinucleotide composition of their hosts, possibly to avoid restriction by the zinc-finger antiviral protein (ZAP). By performing a comprehensive analysis of RNA viruses, we show that, whereas UpA dinucleotides are similarly under-represented irrespective of viral genome composition or host, important differences are observed for CpG. The tendency for vertebrate-infecting viruses to have stronger CpG bias than invertebrate-infecting viruses is not universal. Rather, it is mainly driven by single-stranded (ss) RNA(+) viruses. Conversely, ssRNA(−) viruses have a dinucleotide composition that is unrelated to the host clade. Also, these viruses, especially those in the order Bunyavirales, are extremely CpG-depleted. By focusing on specific viral families, we also show that, even for vertebrate ssRNA(+) viruses, ZAP is unlikely to be a driver of CpG depletion. Consistently, CpG dinucleotides tend to be preferentially depleted in A/U-rich contexts in both vertebrate- and invertebrate-infecting viruses. Finally, within the same viral genomes, individual viral open reading frames (ORFs) can display different CpG content. Analysis of SARS-CoV-2 revealed a remarkable depletion of CpG dinucleotides in ORF1ab and S, but not in N and M. Thus, these results do not support the view that an adaptive shift for CpG depletion in the SARS-CoV-2 lineage occurred as an innate immunity evasion strategy. Our data provide a better understanding of viral evolution and inform approaches based on the modulation of CpG to generate attenuated viruses.

IMPORTANCE Akin to a molecular signature, dinucleotide composition can be exploited by the zinc-finger antiviral protein (ZAP) to restrict CpG-rich (and UpA-rich) RNA viruses. ZAP evolved in tetrapods, and it is not encoded by invertebrates and fish. Because a systematic analysis is missing, we analyzed the genomes of RNA viruses that infect vertebrates or invertebrates. We show that vertebrate single-stranded (ss) RNA(+) viruses and, to a lesser extent, double-stranded RNA viruses tend to have stronger CpG bias than invertebrate viruses. Conversely, ssRNA(−) viruses have similar dinucleotide composition whether they infect vertebrates or invertebrates. Analysis of ssRNA(+) viruses that infect mammals, reptiles, and fish indicated that ZAP is unlikely to be a major driver of CpG depletion. We also show that, compared to other coronaviruses, the genome of SARS-CoV-2 is not homogeneously CpG-depleted. Our study provides new insights into virus evolution and strategies for recoding RNA virus genomes.
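For context, the under- or over-representation of a dinucleotide discussed throughout is conventionally quantified with an observed/expected (odds) ratio. The sketch below is a generic illustration of that metric, not the authors' pipeline, and the toy sequence is made up:

from collections import Counter

def dinucleotide_odds_ratio(seq: str, dinuc: str) -> float:
    """Observed/expected ratio, e.g. rho(CpG) = f(CG) / (f(C) * f(G)).

    Frequencies are per position; a ratio well below 1 indicates
    under-representation of the dinucleotide.
    """
    seq = seq.upper().replace("T", "U")
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    f_xy = di[dinuc] / (len(seq) - 1)
    f_x = mono[dinuc[0]] / len(seq)
    f_y = mono[dinuc[1]] / len(seq)
    return f_xy / (f_x * f_y)

# Made-up toy sequence; in practice this would be a viral ORF such as
# SARS-CoV-2 ORF1ab, S, N or M.
toy = "AUGGCUACGUAGCUCGAUCGAUCGGCGAUAUCGCAUGCUAGC"
print(round(dinucleotide_odds_ratio(toy, "CG"), 2))  # CpG
print(round(dinucleotide_odds_ratio(toy, "UA"), 2))  # UpA

Ratios well below 1 indicate depletion; applied per ORF, the same computation would expose the heterogeneity described for SARS-CoV-2.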
Preparing Revision Guidelines To submit your modified manuscript, log onto the eJP submission site at https://spectrum.msubmit.net/cgi-bin/main.plex. Go to Author Tasks and click the appropriate manuscript title to begin the revision process. The information that you entered when you first submitted the paper will be displayed. Please update the information as necessary. Here are a few examples of required updates that authors must address:

• Point-by-point responses to the issues raised by the reviewers in a file named "Response to Reviewers," NOT IN YOUR COVER LETTER.
• Upload a compare copy of the manuscript (without figures) as a "Marked-Up Manuscript" file.
• Each figure must be uploaded as a separate file, and any multipanel figures must be assembled into one file.

For complete guidelines on revision requirements, please see the journal Submission and Review Process requirements at https://journals.asm.org/journal/Spectrum/submission-review-process. Submission of a paper that does not conform to Microbiology Spectrum guidelines will delay acceptance of your manuscript.

Please return the manuscript within 60 days; if you cannot complete the modification within this time period, please contact me. If you do not wish to modify the manuscript and prefer to submit it to another journal, please notify me of your decision immediately so that the manuscript may be formally withdrawn from consideration by Microbiology Spectrum.

If your manuscript is accepted for publication, you will be contacted separately about payment when the proofs are issued; please follow the instructions in that e-mail. Arrangements for payment must be made before your article is published. For a complete list of Publication Fees, including supplemental material costs, please visit our website.

Corresponding authors may join or renew ASM membership to obtain discounts on publication fees. Need to upgrade your membership level? Please contact Customer Service at <EMAIL_ADDRESS>. Thank you for submitting your paper to Microbiology Spectrum.

Dinucleotide biases in RNA viruses that infect vertebrates or invertebrates. In this article Forni et al analyse the CpG and UpA dinucleotide usage in various RNA viruses and conclude that ZAP cannot be the sole agent responsible for the suppression of CpG. This article follows upon the 2013 article by Simmonds where it was clearly shown that invertebrates do not suppress CpG and therefore viruses that infect them also tend to have a higher CpG content. I think it would have been nice to start from there and mention this article in more detail, because the proposed publication by Forni et al. follows directly upon it.

I like that it goes in detail for each viral family, but where I think the authors could elaborate further is in the analysis of the CpG-rich regions and include an analysis of the conservation of these sequences in the viral families. Certainly their conclusion suggests this, and a deeper analysis of these sequences (are they conserved domains of important proteins? or perhaps the CpG here contributes to overall structure?) would make for a stronger argument. To me, without an analysis of this kind, the article concludes little more than what was already established by Simmonds in 2013.
About ZAP, at the moment we do not know what exactly triggers ZAP; there is abundant information to show CpG does, but it is not the only motif, and if we look at the transcriptome of even mammalian cells, it is clear that some transcripts are CpG rich yet they do not seem to induce ZAP, or maybe they do and we do not have evidence for this yet. So it is difficult to ascertain anything regarding binding to ZAP because so much is missing. Which is why, if the authors want to claim that CpG patterns in viruses are not entirely linked to ZAP activity, they must strengthen their argument in favour of phylogeny with more analyses.

Positive: I think the article is clearly written, the analyses are correct.

Minor correction: increase the font of the Y axis so that we can see straight away if we look at CpG or UpA, or find another way to add this in the legend so that it is more evident.

Forni and colleagues comprehensively analyzed RNA viruses that infect vertebrates and/or invertebrates to determine biases for CpG and UpA dinucleotide contexts. They showed that vertebrate-infecting viruses, particularly single-stranded, plus-stranded RNA viruses, have relatively stronger CpG bias. Interestingly, the CpG ratio is variable in each virus, as shown in Fig. 3. It is also interesting that SARS-CoV-2 has different GC contents among the S, N and M genes. However, the authors should address the following concerns to fortify the manuscript.

Major concerns;

Reviewer #1 In this article Forni et al analyse the CpG and UpA dinucleotide usage in various RNA viruses and conclude that ZAP cannot be the sole agent responsible for the suppression of CpG. This article follows upon the 2013 article by Simmonds where it was clearly shown that invertebrates do not suppress CpG and therefore viruses that infect them also tend to have a higher CpG content. I think it would have been nice to start from there and mention this article in more detail, because the proposed publication by Forni et al.
follows directly upon it.

>>> RE: We are grateful to the Reviewer for their comments on our manuscript and for their thoughtful suggestions. We agree that our manuscript builds on data presented by Simmonds and co-workers, although it reaches different conclusions. As suggested, we have now mentioned Simmonds' work in more detail, as follows: "For instance, Simmonds and co-workers analyzed the representation of CpG dinucleotides in the genomes of RNA and small DNA viruses that infect mammals and insects (which do not possess ZAP) (7). They found no CpG depletion among insect viruses. Conversely, mammalian RNA viruses with single-stranded genomes and reverse-transcribing viruses, but not dsRNA viruses, showed CpG suppression. Specifically, CpG depletion in these viruses was related to the G+C composition of their genomes. The authors thus concluded that mammal-infecting RNA viruses that expose their genetic material to the cytoplasm are subject to selection against CpG."

I like that it goes in detail for each viral family, but where I think the authors could elaborate further is in the analysis of the CpG-rich regions and include an analysis of the conservation of these sequences in the viral families. Certainly their conclusion suggests this, and a deeper analysis of these sequences (are they conserved domains of important proteins? or perhaps the CpG here contributes to overall structure?) would make for a stronger argument. To me, without an analysis of this kind, the article concludes little more than what was already established by Simmonds in 2013.

>>> RE: Thank you for raising this interesting point. We agree that an analysis of CpG conservation across viral gene phylogenies is a good strategy to obtain further insight. We thus selected two viral genera (Mammarenavirus, ssRNA(-), and Betacoronavirus, ssRNA(+)) and the two viral genes showing the highest CpG content in the respective genomes. We generated nucleotide alignments and we counted the fraction of sequences sharing each CpG dinucleotide. As a comparison, the same procedure was applied to GpC dinucleotides. Results indicated that CpG dinucleotides are significantly less conserved than GpC dinucleotides both in the mammarenavirus L gene and in the betacoronavirus M gene. In the L gene, we checked for differences among regions that encode or do not encode known protein domains. Overall, we conclude that CpG dinucleotides are either lost by mutation biases or selected against in these viral genes, irrespective of their location. The results, discussion and methods were updated to include these data, which are summarized in Figure 7.
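The counting procedure described in this response can be sketched roughly as follows; this is an illustrative reimplementation under simplifying assumptions (an ungapped alignment of equal-length sequences), not the authors' actual code:

def dinucleotide_conservation(alignment: list[str], dinuc: str) -> list[float]:
    """For each alignment position where at least one sequence carries
    `dinuc`, return the fraction of sequences sharing it there."""
    n_seq = len(alignment)
    length = len(alignment[0])
    fractions = []
    for i in range(length - 1):
        count = sum(1 for seq in alignment if seq[i:i + 2] == dinuc)
        if count > 0:
            fractions.append(count / n_seq)
    return fractions

# Toy alignment standing in for, e.g., betacoronavirus M gene sequences.
aln = ["AUCGGA", "AUCGGA", "AUAGGA", "AUCGCA"]
print(dinucleotide_conservation(aln, "CG"))  # CpG sites: [0.75]
print(dinucleotide_conservation(aln, "GC"))  # GpC comparison: [0.25]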
About ZAP, at the moment we do not know what exactly triggers ZAP; there is abundant information to show CpG does, but it is not the only motif, and if we look at the transcriptome of even mammalian cells, it is clear that some transcripts are CpG rich yet they do not seem to induce ZAP, or maybe they do and we do not have evidence for this yet. So it is difficult to ascertain anything regarding binding to ZAP because so much is missing. Which is why, if the authors want to claim that CpG patterns in viruses are not entirely linked to ZAP activity, they must strengthen their argument in favour of phylogeny with more analyses.

>>> RE: Thank you so much for this comment. We fully agree that we still miss many details about the function of ZAP and the mechanisms underlying its binding and induction. Nonetheless, we consider that, whatever its binding specificity, restriction by ZAP cannot explain CpG depletion in organisms (invertebrates and fish) that possess no ZAP ortholog. For instance, the binding specificity of ZAP cannot explain why bunyaviruses that infect vertebrates and invertebrates are similarly CpG depleted and why picornaviruses infecting fish and reptiles have similar CpG representation. This said, we have now included additional analyses of CpG (and GpC) conservation across the phylogenies of mammarenaviruses and betacoronaviruses.

Positive: I think the article is clearly written, the analyses are correct.

>>> RE: Thank you so much for your appreciation of our work.

Minor correction: increase the font of the Y axis so that we can see straight away if we look at CpG or UpA, or find another way to add this in the legend so that it is more evident.

>>> RE: We have increased the Y axis font. In figures 1 and 2 we have denoted CpG plots with a black frame and UpA plots with a blue frame.

Reviewer #2 (Comments for the Author): Forni and colleagues comprehensively analyzed RNA viruses that infect vertebrates and/or invertebrates to determine biases for CpG and UpA dinucleotide contexts. They showed that vertebrate-infecting viruses, particularly single-stranded, plus-stranded RNA viruses, have relatively stronger CpG bias. Interestingly, the CpG ratio is variable in each virus, as shown in Fig. 3. It is also interesting that SARS-CoV-2 has different GC contents among the S, N and M genes. However, the authors should address the following concerns to fortify the manuscript.

Major concerns;

1. Line 33, an immune evasion strategy. This is for an evasion strategy against ZAP. However, ZAP is an effector of the "innate" immune response against virus infection. The same thing is found at lines 363 and 365. These (and more if there are) are required to be revised.

>>> RE: We are grateful to the Reviewer for their comments on our manuscript and for their thoughtful suggestions. We apologize for the imprecise wording. We have now modified the sentences to make it clear that we are referring to innate immune responses.

2. Figs. 2 and 3. You should mention dsRNA, ssRNA(-) and ssRNA(+) like Fig. 5. It is hard to understand which virus the authors show.

>>> RE: Thank you for this observation. In figures 2 and 3, genome composition is coded by the style of the frame, as per the legend.

Minor concerns; Line 124, "that". Is this "than"?

>>> RE: The error was corrected. Thank you.

Your manuscript has been accepted, and I am forwarding it to the ASM Journals Department for publication. You will be notified when your proofs are ready to be viewed.
The ASM Journals program strives for constant improvement in our submission and publication process.Please tell us how we can improve your experience by taking this quick Author Survey. Publication Fees: We have partnered with Copyright Clearance Center to collect author charges.You will soon receive a message from<EMAIL_ADDRESS>with further instructions.For questions related to paying charges through RightsLink, please contact Copyright Clearance Center by email at<EMAIL_ADDRESS>or toll free at +1.877.622.5543.Hours of operation: 24 hours per day, 7 days per week.Copyright Clearance Center makes every attempt to respond to all emails within 24 hours.For a complete list of Publication Fees, including supplemental material costs, please visit our website. ASM policy requires that data be available to the public upon online posting of the article, so please verify all links to sequence records, if present, and make sure that each number retrieves the full record of the data.If a new accession number is not linked or a link is broken, provide production staff with the correct URL for the record.If the accession numbers for new data are not publicly accessible before the expected online posting of the article, publication of your article may be delayed; please contact the ASM production staff immediately with the expected release date. Corresponding authors may join or renew ASM membership to obtain discounts on publication fees.Need to upgrade your membership level?Please contact Customer Service at<EMAIL_ADDRESS>Thank you for submitting your paper to Spectrum.Sincerely, Takamasa Ueno Editor, Microbiology Spectrum Journals Department American Society for Microbiology 1752 N St., NW Washington, DC 20036 E-mail<EMAIL_ADDRESS>• Manuscript: A .DOC version of the revised manuscript • Figures: Editable, high-resolution, individual figure files are required at revision, TIFF or EPS files are preferred
3,600.8
2023-10-06T00:00:00.000
[ "Biology" ]
Revisiting the Design and Implementation of GAVIS: J-GAVIS

GAVIS (Genetic Algorithm based Voice Imitation System) [1] is a system whose purpose is to find a set of effects to imitate voice signals. The system is based on Genetic Algorithms and Fourier Transforms. Although GAVIS proved to be effective, some deficiencies were found, among them the execution speed of the system. It is to remedy these deficiencies that a new version of the system was designed and implemented. In this paper, we present the foundations of GAVIS' construction, a new and improved design and implementation of GAVIS (J-GAVIS), and a description of its new features.

Introduction

Speech recognition and speech processing are subjects that are actively researched nowadays and that are intimately related to the use of voice signals. These topics give rise to several problems. One of them, the imitation of a target voice by means of audio effects, constitutes the main focus of this article: specifically, finding the effects that transform a phoneme emitted by a source voice into the same phoneme emitted by a target voice.

Solving this problem has several practical applications. For example, it can be used to dub films, turning the voice of the dubbing actor into the voice of the original actor. It can also be used in advertising, if the voice of a specific person is needed but this person is unavailable.

Voice signals were used for several calculations, and the Discrete Fourier Transform (from now on, DFT) was key in offering a representation in which such signals can be manipulated easily. In fact, most of the operations applied on a signal were done through the DFT.

Presently, there are systems capable of solving this problem. Accordingly, our proposal aims to improve on the quality of the results obtained by such existing systems [2], [1].

The multidimensionality of the problem and the high variety of solutions in the search space led to the selection of Genetic Algorithms to perform the task. The fact that the expected result is known and that a solution can always be accurately assessed reinforces this choice. Some non-conventional elements, which will be discussed, have also been added to the Genetic Algorithm to further aid the search.

The problem of transforming one voice signal into another has a trivial solution. Nonetheless, finding a set of effects which can approximate the DFTs of two or more input signals to their respective output signal DFTs is not an easy task.

The work presented here consists of a description of the implementation of a model that uses heuristic search methods to find the audio effects needed for the aforementioned transformation, and of its subsequent improvement.

In section 2, a brief overview of Genetic Algorithms will be given. Section 3 will describe some relevant notions on audio signals and DFTs. Section 4 describes the design and implementation of the original GAVIS. Section 5 will discuss the deficiencies found in GAVIS and the necessity of a new version, along with the new system's design and the implementation of all its relevant features. In section 6, results will be presented, analyzed and compared to the results of the previous version. Finally, we conclude in section 7 and sketch current and future work.
Genetic Algorithms

Genetic algorithms are a heuristic search technique mainly used for optimizing solutions (i.e. combinatorial optimization) to a problem [3], [4]. They are strongly based on species evolution. The idea behind the operation of the algorithm is to take several initial solutions and evolve them by evaluating their overall quality and applying genetic operators to make them gradually improve. Although this type of algorithm is good at approximating optimal solutions, it never guarantees finding the optimal solution.

Generally, solutions are coded as arrays of values (chromosomes) composed of features of such solutions. Each feature (gene) may be represented as a binary number or as another data type. This constitutes the valid language for the phenotype or individual, which is an embodiment of a particular potential solution for the problem to be solved.

A brief description of each element involved in the algorithm will be given.

Phenotype: The phenotype is the representation of an individual solution within the search space. It serves as a model for potential solutions to the problem. It may also be referred to as an individual.

Chromosome: A chromosome is an encoding for the phenotype. It is typically a string, although it might have other forms such as graphs, matrices or any data structure, depending on the problem to be solved.

Population: The pool of individuals used for evolving. As a result of the change in the solutions of the pool, the population itself naturally changes. Each step (iteration) in this process is called a generation, and constitutes an epoch or a snapshot of the population at a certain time in the execution.

Fitness: The fitness function evaluates the quality of a given individual. It is commonly used to test the stopping criterion and to pair the individuals in the population who will undergo crossover.

Crossover: A genetic operator that combines individuals and produces offspring solutions that preserve several features of the original individuals.

Mutation: Another genetic operator that applies a spontaneous change to a given offspring solution. The main goal of this operator is to maintain a minimum of diversity for exhaustive exploration of the search space.

Natural Selection: The natural selection criterion is the one for eliminating unfit individuals along the execution of the algorithm. The implementation of this operation usually holds a close relationship with the fitness function, but may also be defined in a different manner.

Termination criterion: A criterion that must be met to end the execution of the algorithm. Generally, this criterion might be associated with the quality of the best solution, convergence of the population, a fixed number of iterations, or a combination of the mentioned criteria.

Some parameters are needed for the Genetic Algorithm to be fully defined, such as the population size, the crossover probability P_c, and the mutation probability P_m, among others.

A pseudocode for a Genetic Algorithm is as follows:

1. Initial population creation (usually random)
2. Evaluation of each individual in the population
3. While the termination criterion is not met:
   (a) Apply crossover with probability P_c to a certain subset of the population
   (b) Apply mutation with probability P_m to each generated offspring solution
   (c) Evaluation of each individual in the population
   (d) Verify the natural selection criterion on all the individuals in the population and apply elimination when needed
   (e) Repopulate with new individuals if needed to meet the population size
4. Return the best solution found

DFTs and Voice Signals

A DFT is a tool that allows the visualization of a signal in the frequency domain [5], [6], [7], and it is defined as:

f_k = Σ_{n=0}^{N-1} s_n e^{-2πikn/N}, k = 0, 1, ..., N-1

where:
• N is the number of samples of the signal.
• f_k is the k-th component of the array that represents the DFT.
• s_n is the n-th component of the input signal expressed in the time domain.
It must be noted that each f_k and each s_n is a complex number. The reason for using a DFT is that the alterations made on an array represent a sound effect if the array is a DFT. If the array represented a signal in the time domain, said alterations would result in meaningless distortion and misplacement of the sound waves, which is not a useful reconfiguration of the signal.

The original GAVIS: Design and Implementation

The system was originally designed to find a set of sound effects needed to convert one voice into another. It was found that the core of the problem is that of an array converter. The idea of the system is to find a set of changes such that, when they are applied to the source values, a good approximation of the target values is obtained. These changes use sections of the array. Basically, a random section of the array is chosen, and then the values in that section are added to the values in another area of the array. More specifically, the mentioned changes are based on three concepts:

• Initial section: Has a start and an end index that define a portion of the array.
• Destination section: Is defined by a starting index where the initial section will be added. This area is selected randomly, but it must fit within the boundaries of the array. The accompanying figure shows how the initial section is added to the destination section.
• Coefficients: The initial section, the destination section and/or the values that will be added are multiplied by a certain factor.

These changes are actually also sound effects that generate some transformation when they are applied to the DFT of a sound signal. It is to be noted that any number of said effects may be used on the DFT of any signal. This is done by processing each transformation sequentially, which means that each effect makes its own changes on the DFT of a signal that has already undergone changes due to the processing of all previous effects.

In this version of the system, an individual is composed in part of an array of chromosomes. Each chromosome contains several parameters that describe a sound effect as proposed above. This poses a problem, in the sense that all the values are linked and a change in one of them may produce an invalid chromosome. The corresponding mutation operator addresses this problem, and it will be described later in this section. The chromosome's parameters will be briefly named and explained:

• Left limit: A random integer value between the initial and final indices of the array.
• Right limit: Generated in the same way as the left limit, but if its value is lower than the left limit's, they are swapped. The left limit and right limit together are the boundaries of the initial section.
• Target: A randomly generated integer value that acts as the starting point for the destination section.
• Initial section coefficient: The factor by which the initial section is multiplied.
• Destination section coefficient: The coefficient that multiplies the destination section.
• Added section coefficient: The initial section is stored in an auxiliary array; this array is then multiplied by the added section coefficient and then added to the destination section.

To generate all three mentioned coefficients, the first step is to generate a random boolean value to determine whether the coefficient should be a number between 0 and 1 or a number between 1 and a given maximum coefficient value. The next step is to generate another random boolean value that decides whether the overall value of the coefficient will be multiplied by −1 or not. The reasons behind these steps are that negative coefficients may be needed, and that they avoid a bias in the population that would arise from having a greater chance of obtaining a coefficient that multiplies than one that divides.

Other than the chromosomes, an individual has variables that store its fitness and age. In order to obtain the fitness of an individual, the effects in the chromosomes are sequentially applied to the DFT of the source signal, and then a Euclidean distance d is calculated between the resulting array and the array that represents the DFT of the target signal. This process is summarized in the following equation:

d = √( Σ_i |x_i − y_i|² )

where:
• d is the Euclidean distance
• n is the number of chromosomes assigned to the individual
• x_i is the i-th component of the array x, x being the original array with all the effects applied to it
• y_i is the i-th component of the array y, y being the target array

A feature to be noted is the use of a lifespan (age) variable for individuals. This lifespan is measured in terms of iterations of the main algorithm. The variable is used to eliminate "old" individuals, its purpose being to restrict excessive dissemination of a good individual's characteristics and thereby avoid premature convergence. When an individual "dies" of age, it is replaced by a new, randomly generated individual.

The two main genetic operators, crossover and mutation, are present in this implementation of the system and are defined as follows:

1. Crossover: Selects the individuals to be combined and generates a single new individual. The offspring individual inherits a chromosome i from one of its parent individuals. The source of the chromosome is decided by a boolean random number. It is important to note that the location of the inherited chromosome is the same in the offspring individual. Figure 1 shows the process on a couple of random arrays. The phenotypes are initially sorted by their fitness value. Then, for each phenotype in the population, a companion is chosen using a "crossover distance", described by two equations: the first is based on a normal inverse cumulative distribution function, and the second defines the probability p. P_c is 1 under this implementation.

2. Mutation: Applies a random transformation on one or more of the parameters of a given chromosome. If the transformation affects the left limit and/or the right limit, the target must be recalculated. The values produced by the transformation are obtained in the same manner as the initial values of the individual were generated. In this case, the mutation's objective is to unlink the parameters inside a chromosome and to diversify the population. Mutation is activated with probability P_m when offspring is spawned from crossover.
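As an illustration only, the following Python sketch shows how a section-based change of this kind can be applied to a DFT and how the Euclidean-distance fitness can be evaluated. The function names, the parameter packing and the random test data are assumptions made for the example; they are not the authors' Matlab code.

import numpy as np

# Illustrative sketch of an original-GAVIS "change" applied to a DFT and of
# the Euclidean-distance fitness. Names and API are assumed for the example.

def apply_section_effect(dft, left, right, target, c_init, c_dest, c_add):
    # Copy the initial section, scale it, and add it onto the destination
    # section, which itself is scaled by its own coefficient.
    out = dft.copy()
    section = c_add * (c_init * out[left:right])
    end = target + (right - left)          # destination must fit in the array
    out[target:end] = c_dest * out[target:end] + section
    return out

def fitness(source_dft, target_dft, effects):
    # Apply all effects sequentially (cumulatively), then measure the
    # Euclidean distance to the target DFT; np.abs handles complex values.
    x = source_dft
    for e in effects:
        x = apply_section_effect(x, *e)
    return np.sqrt(np.sum(np.abs(x - target_dft) ** 2))

source = np.fft.fft(np.random.randn(15))   # DFT of a 15-sample source signal
target = np.fft.fft(np.random.randn(15))   # DFT of the target signal
effects = [(2, 6, 8, 1.5, 1.0, -0.7)]      # one chromosome: left, right, target, coefficients
print(fitness(source, target, effects))

In the real system an individual would carry hundreds of such chromosomes (300 in the reported tests), and this distance would drive selection.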
Due to crossover, each iteration doubles the size of the population. Therefore, to maintain a fixed population size, the offspring are evaluated, the population is sorted again by fitness, and the worst half of the population is discarded. This operation constitutes the Natural Selection simulation for the implementation. This implementation stops when a certain number of iterations has been completed, and it stores a copy of the best individual found across all generations. This version of the system was implemented in Matlab R2010b [8].

The new version: J-GAVIS

Having completed the implementation and testing of GAVIS, some defects were found:

• The system's execution speed was very slow.
• Convergence took a very long time.
• Although the obtained solution was good, better solutions could be obtained.

We sought to address these deficiencies by the following means:

• Changing the programming language.
• Redefining some of the elements in the system.

The change in programming language was of great aid. We chose Java as the new language for implementation due to its portability and ready-to-use functionalities. Another factor which favored the choice of Java was its faster execution speed compared to Matlab. In the next section, some comparative results between both versions will be shown.

As for the redefinition of elements in the algorithm, we identified a couple of key elements which needed to be redesigned. In this section, we shall describe and explain both of the elements which were redesigned.

Changes

First of all, we redesigned the representation of a change in the signal. This was because a change had to be more versatile than a section of the array. We found that transporting sections of the array made the combination space far too wide, since the search needed to find both the adequate size of the section and the adequate section itself. Therefore, the new structure of a change is as follows:

• Initial index: Refers to a specific value in the array at a certain index. This value ranges from 0 to the array length.
• Offset: Defines a distance from the initial index. It must be noted that the offset may take values in the range [−array length, +array length].
• Destination index: The index to which the value at the initial index will be transported and added. The value is determined by initial index + offset. This value may be outside of the array's boundaries.
• Gain: The value that will be added is multiplied by this coefficient.

The accompanying figure shows the process by which a change is made on an array under the new implementation. As in the previous version of the system, the changes are actually also sound effects that generate some transformation when they are applied to the DFT of a sound signal, and the application of the changes is still cumulative.

The new version of the changes was created in order to lower the number of operations performed by a gene within each individual. This new approach achieves a more fine-grained evolution of each individual and yields a "high definition" individual. This choice also allows greater freedom when crossover is executed, since the new individuals have more elements than in the original version, leading to a greater number of possible crossover points.

An individual under the new implementation presented here is still defined as being composed in part of an array of chromosomes; however, it has been greatly simplified relative to the previous version.
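For comparison, a sketch of the redesigned J-GAVIS change follows: a single array value, displaced by an offset and scaled by a gain, is added at the destination index. Destinations may fall outside the array, which the new fitness function (described next) penalizes. Again, the names and the handling shown are illustrative assumptions rather than the authors' Java implementation.

import numpy as np

def apply_change(dft, initial_index, offset, gain):
    # J-GAVIS-style change: one value, moved by an offset and scaled by a
    # gain, is added at the destination index (illustrative sketch).
    out = dft.copy()
    destination = initial_index + offset    # may lie outside the array
    if 0 <= destination < len(out):
        out[destination] += gain * out[initial_index]
    # Out-of-bounds destinations leave the array untouched here; in J-GAVIS
    # the fitness function penalizes such changes instead (see next section).
    return out

dft = np.fft.fft(np.random.randn(15))
print(apply_change(dft, 3, 5, 0.8))    # in-bounds destination: value added
print(apply_change(dft, 3, -5, 0.8))   # out-of-bounds: array unchanged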
As in the former version of the system, an individual has variables that store its fitness and an age. It is to be noted that under this implementation of the system, the mutation operator is now irrelevant, due to the number of new genes that enter the population by means of the process mentioned above.

Fitness function

The original fitness function of an individual was modified to penalize changes that occur outside of the boundaries of the array. Nonetheless, the main concept of the function was preserved: it still refers to a total Euclidean distance d between the array with the effects already applied to it and the array representing the DFT of the target signal. The following equation shows how the value is calculated:

where:
• d is the Euclidean distance
• n is the number of chromosomes assigned to the individual
• x_i is the i-th component of the array x, x being the original array with all the effects applied to it
• y_i is the i-th component of the array y, y being the target array
• f is a scalar factor used as a coefficient for the penalty

It may be observed in (3) that i may take values lower than 1 and greater than n. The reason for using values outside the boundaries of the array is for the algorithm to learn proper values for the offset parameter and to randomly nullify certain changes.

The Natural Selection simulation model was preserved from the previous version. Nonetheless, some alternatives were tested before making this choice:

• Eliminating a uniformly distributed group of individuals.
• Maintaining only the offspring individuals (crossover was different, and two new solutions were spawned instead of one).

Neither of these alternatives proved to be better than eliminating the worst half of the population. It is also to be emphasized that the age helped to keep diversity in the population across all iterations, since it forces new individuals to replace old ones.

Results

Several results were obtained from tests executed on the system. The instances of the tests all included two input arrays and two destination arrays. Each execution used the following parameters:

• Signal length: 15 samples
• Number of chromosomes per individual: 300
• Population size: 1000
• P_m (probability of changing a parameter of a chromosome): 0.05
• Individual lifespan: 5 generations
• Iterations: 1500

These tests were performed on both the previous and the current implementation with the same parameters to make the comparison as fair as possible. The results of the tests are shown in Figures 2 (for GAVIS) and 3 (for J-GAVIS). These results show that J-GAVIS' performance is clearly superior to that of GAVIS. These improvements can be explained by the following reasons:

• The better quality of the solutions may be explained by the new structure of the changes used in the new implementation. As mentioned previously, the changes now target a single element in the array instead of a full section of the array. By operating in this manner, greater precision is achieved, since changing a single element in the array yields a lower-impact variation, whereas using a whole section of the array for the change has a higher impact. Hence, two positive effects may be observed when using the new embodiment of the changes:
1. More subtle transformations are achieved by using only one of the components of the array. The solutions that are obtained are more fine-grained.
2. Using a section of an array to implement transformations carries the risk of changing the vicinity of the component which needed to be improved. Potentially, this vicinity had high quality but was being overwritten or modified by the incoming section. Using a single component of the array to represent the change eliminates this "collateral damage".
• The improvement in speed can be observed in the performance charts shown in this section. We attribute this improvement to the fact that MATLAB is natively an interpreted language, which makes it execute more slowly than Java, which is not. Another reason why J-GAVIS executed faster than the original GAVIS is that we found a deficiency in the evaluation process of GAVIS: certain individuals were evaluated twice, since under the original implementation it was not possible to know whether an individual had already been evaluated. We also found that eliminating the mutation operator (it was not necessary under the new version) helped to speed up execution.

Figures 4 and 5 show the two output arrays for GAVIS and the found solutions. Figures 6 and 7 show the two output arrays for J-GAVIS and the respective solutions. In Figures 4 and 5, the dotted lines represent each transformation of the respective original arrays using the effect described by the best individual found at each test; the solid line in the same figures denotes the target array. In Figures 6 and 7, each colored line represents a test, and the cyan line represents the target signal. Again, it may be noted that the results obtained for J-GAVIS are of greater quality than those obtained by GAVIS.

In Figure 8, it may be observed that during the first generations of the execution, the distance diminishes at a greater rate than at later generations. The difference between the fitness of the best individual and the average fitness of the population is also greater toward the early generations. This pattern was present in both GAVIS and J-GAVIS. It must be noted that at certain generations, the fitness of the best individual is worse than that of the best individual at a previous generation. This is because a better individual did not spawn from the current generation and the former best individual died of age. The average fitness of the population has a tendency to improve over all generations, which implies that the whole population gradually lowers its distance. This tendency was observed in both implementations of the system.

Conclusions, current and future work

In this work, it has been empirically shown that it is possible to obtain a set of effects such that, when applied to multiple source-signal DFT arrays, the results approximate the corresponding target-signal DFT arrays. Based on these results, a conclusion that may be inferred is that, using multiple samples of source and target voices, it is possible to find a set of effects which can transform the sound of one voice into another specific one.

Another conclusion which may be drawn is that heuristic methods are a good choice for tackling this problem. This is because the search space for this problem is very large, and heuristic methods are a convenient alternative for exploring search spaces of this type due to their evaluation capability. Another important feature is that these methods can generate good candidate solutions in order to find an approximation close enough to the optimal solution.

Current work is being done on the system to test other heuristic methods. A preliminary version based on Ant Colony Optimization [9] has already been tested, and the results are promising from an efficiency standpoint, but correct execution requires the setting of many parameters. The use of other heuristic methods to solve this problem is also envisaged as future work. Nonetheless, genetic algorithms seem to work better than other heuristic methods so far.
Different fitness functions and mechanics for the genetic operators continue to be explored and tested. It was found that the inclusion of these new features consistently improves the performance of the algorithm and the quality of the found solutions. To continue improving the system, a version in C is planned to be released, and changes to the crossover operator, the chromosome representation and the fitness function are being actively researched.

Another feature which could be implemented as future work is the use of statistics in the system to automatically vary P_c and P_m depending on the population state of the instance of the problem being solved.

Figure 1: This figure shows the mechanics of the crossover operator.

Figure 8: Fitness values per generation. The X-axis represents the generation number and the Y-axis represents the distance value. The dotted line symbolizes the average distance over the whole population at each generation; the solid line shows the evaluation (distance) of the best individual at each generation.
5,800.2
2011-12-01T00:00:00.000
[ "Computer Science" ]
Statistical damage constitutive model based on self-consistent Eshelby method for natural gas hydrate sediments

Natural gas hydrate (NGH) is widely distributed in marine sediments and continental permafrost and is a promising energy resource. The sustainable and safe exploration and production of NGH require a full understanding of its mechanical behavior, which is still a challenge to our community. In this paper, a statistical damage constitutive model named SC-SDCM is developed for NGH sediments. The NGH sediments are regarded as two phases: a matrix phase of solid mineral grains, pore fluid, and gas, and an inclusion phase of hydrate crystal. The effective elastic parameters of such a two-phase composite are estimated by the self-consistent (SC) method according to the equivalent inclusion principle of micromechanics. The mesoscopic element strength of the NGH sediments is described by the Weibull statistical distribution and the damage theory of composite materials. Combined with the Drucker-Prager failure criterion at the microelement level, the damage constitutive model of NGH sediments is then established. The new SC-SDCM is shown to be reliable and robust against triaxial experimental observations on artificial cores and naturally occurring samples under different confining pressures and hydrate saturations. The SC-SDCM properly describes the stiffness, peak strength, and strain-softening properties of the NGH sediments. Moreover, the predictions of SC-SDCM are the closest to the experimental observations among several published constitutive models, especially for the near- and after-damage stages with relatively high hydrate saturation.

| INTRODUCTION

Natural gas hydrate (NGH) is a kind of solid crystal composed of water and methane, widely found in land permafrost and marine sediments. [1][2][3] According to the US Geological Survey, the global reserve of NGH is estimated at about 100 000 to 300 000 000 trillion cubic feet, and the total amount of methane stored in NGH far exceeds that in proved conventional natural gas reservoirs. [4][5][6] NGH is therefore an important and promising energy resource. However, the NGH reserves in marine sediments usually occur in shallow, loosely compacted formations with low strength. Such reservoir conditions lead to high risks of borehole instability and sanding during drilling and production, which bring great challenges to the safe and effective mining of NGH. For example, the Canadian Mallik 2L-38 well in 2007 and the Japanese trial production at the Nankai Trough in 2013 were forced to shut down due to serious sand production after 60 hours and 6 days, respectively. [7][8][9][10] Moreover, the strength of a hydrate reservoir further decreases with the decomposition of NGH, which can result in the slip of hydrate sediments toward the mining center and subsidence of the seabed, especially for NGH production on subsea slopes, and could even trigger geological disasters such as submarine landslides and tsunamis. 1,[11][12][13][14][15][16][17] Therefore, it is of great significance to understand the mechanical properties of NGH sediments well for the safe and sustainable exploitation of NGH. Aiming to better understand the mechanical characteristics of NGH sediments, researchers have carried out numerous experimental tests, especially triaxial shear tests.
Due to the difficulties and high cost of coring naturally occurring NGH sediments, most of the published triaxial tests were done using artificial cores with skeletons made of Toyoura sand, [18][19][20] Ottawa sand, 21 or quartz sand, 19 with the NGH simulated by ice (eg, Masui et al 18 ) or by synthetic methane hydrate (eg, Hyodo et al 22 ). These measurements on artificial cores indicate that (i) the existence of gas hydrates increases the shear strength of hydrate sediments, and the cohesive strength and the elastic moduli of the sediments are influenced by the NGH saturation and the confining pressure; 18,19,[21][22][23] (ii) the stress-strain curve of artificial NGH cores shows strain stiffening before peak strength and strain softening after peak strength; 24,25 and (iii) the type of sand that constitutes the skeleton of the artificial cores has a key effect on the stiffness of the sample, but has almost no effect on the strength of the specimen. 26 In addition, Yun et al 27 used sand, silt, and clay to synthesize soil specimens containing methane hydrate, and they observed that the stress-strain relation of hydrate sediments is a complex function of sediment particle size, confining pressure, and hydrate content. Kajiyama et al 20 used round glass beads and Toyoura sands to make artificial cores with hydrates, and then performed triaxial tests on each. Though the glass beads and Toyoura sands have similar stiffness, the two types of artificial cores show quite different breakage patterns due to the shape and roughness of the grains. Compared with the numerous measurements on artificial cores, experiments on naturally occurring NGH samples are still scarce. To our knowledge, Yoneda et al [28][29][30] have performed a series of measurements on samples from the seabed of the Nankai Trough and the Krishna-Godavari Basin. The mechanical properties obtained by Yoneda et al [28][29][30] are consistent with those of artificial specimens. Based on experimental measurements, researchers have proposed different types of constitutive models to quantitatively describe the geomechanical behavior of NGH sediments, including nonlinear elastic models, 26 elastic-plastic models, 24,[31][32][33][34] damage statistical constitutive models, 35,36 and critical state models, 25,[37][38][39][40][41] which has been an active research area in recent years. There are mainly three key issues for the constitutive modeling of NGH sediments: the effects of saturation and cementation of NGH, the critical state of NGH, and the damage description. Klar 33 established an elastoplastic constitutive model within a state-dependent, shear-dilatancy framework. The cementation strength parameter (p b ) was used to quantitatively describe the cementation contribution of NGH, and the degradation of the cementation was expressed as a function of the plastic deformation of the sediments. Yan and Wei 34 introduced the concept of effective hydrate saturation to describe the influence of the occurrence mode of NGH, and the hydrate cementation effect was considered in a yield function. Another way to consider the influence of NGH on the mechanical properties of the sediments is to integrate the critical state of the NGH phase behavior. Uchida et al 37 established a critical state model with five model parameters to describe the mechanical behavior of NGH sediments.
Implicit (ISSF) and explicit (ESSF) analytical strain-softening models were developed, respectively, by Pinkert and Grozic 42 and Pinkert et al, 43 which considered ten correlation coefficients for the soil skeleton and five parameters related to NGH behavior. This kind of model was extended by introducing different parameters to consider the critical state line, the cementation between hydrate and sand particles, and the like. [38][39][40][41]44 By analogy with the damage theory of permafrost, Wu et al 35 established a nonlinear damage constitutive model of NGH sediments based on the assumptions that the microelement strength followed a Weibull distribution and was controlled by the Drucker-Prager failure criterion. Some researchers also integrated models of critical state soil mechanics to describe the inelastic mechanism. [45][46][47] Zhang et al 36 further developed the damage statistical constitutive model of NGH sediments to include the influence of a damage threshold and a residual strength. Although much effort has been made on the constitutive modeling of NGH sediments, the problem is still not fully solved and remains far from satisfactory due to its complexity. Basically, NGH sediment is composed of solid mineral grains, hydrate crystals, pore fluid, and gas; it is a kind of multiphase composite material with remarkable heterogeneity in microstructure. Based on the idea of "homogenization" and integrating information on the meso-structure, the mesomechanics of composite materials offers methods to obtain the effective properties of macroscopically homogeneous, meso-heterogeneous media, related to Eshelby's inclusion problem. 48 Such effective inclusion methods mainly include the self-consistent scheme, 49-52 the differential method, 53,54 and the Mori-Tanaka method. 55 Following the idea of homogenization, researchers have proposed different methods to model the effective elastic properties of NGH sediments, such as Helgerud et al 56 and Nguyen et al. 57,58 In this paper, we develop a new constitutive model of NGH sediments based on the equivalent inclusion model of mesoscopic mechanics and statistical damage theory. The NGH sediment is treated as a two-phase composite material: the solid mineral grains, pore fluid, and gas form the matrix phase (background material), and the hydrate crystal is the inclusion phase. First, we will revisit Eshelby's inclusion problem and predict the effective moduli of NGH sediments with different hydrate saturations by the self-consistent scheme. Then, we will develop the framework of the constitutive model by integrating the statistical damage theory of mesomechanics with the self-consistent scheme. Finally, the performance of this new constitutive model will be tested against published experimental results and compared with other constitutive models.

| EFFECTIVE MODULI OF NGH SEDIMENTS BY SELF-CONSISTENT SCHEME

The NGH sediments are shallow and loosely compacted under the seabed, and constitute a typical composite material with four components: solid mineral grains, hydrate crystals, pore fluid, and gas. From the perspective of the mesomechanics of composite materials, the NGH sediment is a medium that is heterogeneous at the meso-scale and homogeneous at the macro-level. Hence, the effective macroscopic properties of NGH sediments can be estimated from the properties of their components, the corresponding volume fractions, and the geometric information, by the self-consistent method based on the theory of equivalent inclusions (as shown in Figure 1).
The elastic response caused by an eigenstrain within its matrix is the so-called "inclusion problem," which is the basis of mesomechanics. The concept of eigenstrain 59 refers to a small inelastic strain caused by physical or chemical processes (such as thermal expansion, phase transition, prestrain, or plastic strain) in a subdomain Ω within an infinite isotropic medium R (as shown in Figure 2). Such an eigenstrain results in a self-balancing stress, such as thermal stress, phase transformation stress, prestress, or residual stress, which is the eigenstress. Eshelby 48 decomposed the eigenstrain problem and transformed it into a problem with an initial stress (−σ*_ij) in the subdomain Ω and a distributed force (p_i) on its boundary, which becomes an elastic mechanics problem of an infinite elastic medium with a concentrated force in it. The distributed force p_i on the surface can be regarded as a concentrated force in the infinite medium, and its displacement field can be expressed by the Kelvin solution with Green's function. For this classic inclusion problem, Kinoshita and Mura 60 gave the expression of the uniform strain tensor in the inhomogeneous matrix phase, where G_ikjl is the Green's function of the corresponding displacement tensor caused by a unit force applied in a given direction, C_klmn is the elastic stiffness tensor of the matrix phase, and ε*_mn is the eigenstrain tensor. The symmetric tensor of the Green's function was given by Mura, 59 where K_ip(x) = C_ijpl x_j x_l is the Christoffel stiffness tensor in the x direction; x_1 = sinθ cosφ/a_1, x_2 = sinθ sinφ/a_2, x_3 = cosθ/a_3; a_1, a_2, and a_3 are the semiaxes of the ellipsoidal inclusion; and θ and φ are the spherical coordinates of the main axis vector x of the ellipsoidal inclusion.

Figure 1. Transformation of the meso-heterogeneous NGH sediments (gray: mineral particles; yellow: hydrate crystals; blue: pore fluids and gases) to a macroscopically homogeneous medium based on the concept of equivalent inclusion.

According to the Eshelby equivalent inclusion theory, each component of the microscopically heterogeneous structure in the composite material is regarded as an inclusion phase, and based on the volume average of Hooke's law, the effective elastic coefficient (C) of the composite material can be expressed accordingly (Equation (3)). According to Willis (1977), 61 the strain concentration factor A_i (the ratio of the inclusion strain to the matrix strain) is defined as follows:

A_i = [I + G(C_i − C)]^(−1)    (4)

where G is the Green's function tensor, I is the fourth-order symmetric unit tensor, I_ijkl = (δ_ik δ_jl + δ_il δ_jk)/2, and δ_ik is the Kronecker symbol. Combining Equation (3) with Equation (4), the effective elastic coefficient (C) can be expressed as Equation (5), where V_i is the volume fraction of the ith inclusion and C_i is the elastic coefficient of the ith inclusion. It should be noted that C appears on both sides of the equation and needs to be solved iteratively. For the self-consistent (SC) approximation of mixtures with N phases in Equation (5), Berryman 62-64 gave a general form as follows:

Σ_{i=1}^{N} V_i (K_i − K_sc) P^{*i} = 0,  Σ_{i=1}^{N} V_i (μ_i − μ_sc) Q^{*i} = 0    (6)

where V_i, K_i, and μ_i are the volume fraction, the bulk modulus, and the shear modulus of the ith component, respectively. The geometric factors P and Q are functions of the effective aspect ratio (α, the ratio of the minor axis to the major axis for an elliptical shape) of the components, and can be calculated by the T-matrix method given in the Appendix. The superscript *i on P and Q refers to the factors for the ith component embedded in the background medium with the self-consistent equivalent moduli K_sc and μ_sc.
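The coupled iteration can be made concrete with a small numerical sketch. The version below specializes Equation (6) to spherical inclusions, for which P and Q take well-known closed forms; the paper itself uses general aspect ratios through the T-matrix expressions in the Appendix, and the moduli and volume fractions below are made-up illustrative values.

# Minimal sketch of the coupled self-consistent iteration in Equation (6),
# specialized to spherical inclusions (an illustrative assumption; the paper
# uses general aspect ratios via the T-matrix factors in its Appendix).

def p_q_sphere(K_sc, mu_sc, K_i, mu_i):
    # Standard P and Q factors for a spherical inclusion in the SC medium.
    zeta = mu_sc * (9 * K_sc + 8 * mu_sc) / (6 * (K_sc + 2 * mu_sc))
    P = (K_sc + 4 * mu_sc / 3) / (K_i + 4 * mu_sc / 3)
    Q = (mu_sc + zeta) / (mu_i + zeta)
    return P, Q

def self_consistent(phases, tol=1e-8, max_iter=500):
    # phases: list of (volume_fraction, K_i, mu_i). Returns (K_sc, mu_sc).
    # Initialize with the Voigt (arithmetic) average, then iterate the
    # rearranged form of Equation (6): K = sum(v*k*P)/sum(v*P), etc.
    K = sum(v * k for v, k, m in phases)
    mu = sum(v * m for v, k, m in phases)
    for _ in range(max_iter):
        num_K = num_mu = den_K = den_mu = 0.0
        for v, k, m in phases:
            P, Q = p_q_sphere(K, mu, k, m)
            num_K += v * k * P
            den_K += v * P
            num_mu += v * m * Q
            den_mu += v * Q
        K_new, mu_new = num_K / den_K, num_mu / den_mu
        if abs(K_new - K) + abs(mu_new - mu) < tol:
            break
        K, mu = K_new, mu_new
    return K, mu

# Two-phase example: hydrate volume fraction = S_h * porosity (0.4 * 0.4 here).
matrix = (0.84, 5.0, 2.0)    # (volume fraction, K in GPa, mu in GPa), assumed
hydrate = (0.16, 7.9, 3.3)   # hydrate crystal moduli, assumed values
K_sc, mu_sc = self_consistent([matrix, hydrate])
E_sc = 9 * K_sc * mu_sc / (3 * K_sc + mu_sc)                 # Young's modulus
nu_sc = (3 * K_sc - 2 * mu_sc) / (2 * (3 * K_sc + mu_sc))    # Poisson's ratio
print(K_sc, mu_sc, E_sc, nu_sc)

The last two lines mirror the conversion from (K_sc, μ_sc) to (E_sc, ν_sc) used in the next paragraphs, since an isotropic material has only two independent elastic constants.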
To highlight the contributions of NGH, we choose a two-phase model to describe the NGH sediments: the solid mineral grains, liquid, and gas together form one phase, the matrix, and the NGH is the other phase, the inclusion. The volume fraction of NGH is determined from the hydrate saturation (S_h) and the porosity. The two formulas in Equation (6) are coupled, and therefore K_sc and μ_sc must be solved by simultaneous iteration. There are only two independent elastic constants for isotropic materials according to elastic mechanics theory. Therefore, the effective Young's modulus (E_sc) and Poisson's ratio (ν_sc) can be calculated once the effective bulk modulus and shear modulus of the NGH sediments have been determined by the SC scheme in Equation (6).

| Meso statistical damage constitutive model of NGH sediments

It is assumed that the NGH sediments, without considering damage, are isotropic and linear elastic, so the stress tensor and strain tensor conform to Hooke's law as follows:

σ_ij = λ ε_kk δ_ij + 2 μ_sc ε_ij    (9)

where σ_ij and ε_ij are the stress tensor and strain tensor, respectively; λ is the Lame constant, equal to K_sc − 2μ_sc/3; and δ_ij is the Kronecker symbol, δ_ij = 1 when i = j, and δ_ij = 0 when i ≠ j.

Figure 2. Diagram for the Eshelby inclusion problem.

For a conventional "drained" triaxial test, the pore pressure can be ignored, so the maximum principal stress (σ_1) is the axial compression pressure applied at the two ends of the core, and the middle principal stress is equal to the minimum principal stress (σ_2 = σ_3), both being equal to the confining pressure (P_c). According to the stress-strain relationship in the principal stress space (Equation (9)), the triaxial stress-strain relationship of NGH sediments can be written as follows:

Δσ = E_sc ε_1 − (1 − 2ν_sc) σ_3    (10)

where ε_1 is the axial strain; σ_3 is the confining pressure; Δσ = σ_1 − σ_3 is the axial deviation stress; and E_sc and ν_sc are the effective Young's modulus and Poisson's ratio, respectively, estimated by the SC scheme in Section 2.

According to the equivalent strain hypothesis of Lemaitre (1984), 65 the strain ε caused by the nominal stress σ applied on the damaged material is equal to that caused by the effective stress σ* applied on the nondamaged material, that is:

ε = (C*)^(−1) σ = C^(−1) σ*,  with σ* = σ/(1 − D) and C* = (1 − D) C    (11)

where D is the damage variable; I is the unit tensor; and C* and C are the elastic moduli of the damaged and nondamaged material, respectively. The elastic moduli C can use their self-consistent estimations. So, when describing the mechanical behavior after damage, the equivalent stress after damage can be used to replace the nominal stress in the constitutive relation of nondamaged materials. Based on the theory of damage mechanics, the constitutive relation of NGH sediments considering damage can be obtained from Equations (10) and (11) as follows:

Δσ = E_sc ε_1 (1 − D) − (1 − 2ν_sc) σ_3    (12)

It should be noted that E_sc and ν_sc here are the same as those in Equation (10) before damage. Similar to other naturally occurring materials (eg, rocks), the mechanical properties of NGH sediments show obvious randomness, which is difficult to describe well with a single variable or parameter. Therefore, we discretize the NGH sediments into infinitely many microelements, the strength of which is a random variable (F).
Considering that the Weibull distribution 66 can cover a variety of random distributions by changing its shape parameter (m) and has been widely used in different engineering areas, here we choose the Weibull distribution to describe the statistical distribution of F for NGH sediments. The probability density function of F with the Weibull distribution is as follows:

P(F) = (m/F_0)(F/F_0)^(m−1) exp[−(F/F_0)^m]    (13)

where F is the random variable describing the Weibull distribution of the element strength, and m and F_0 are the parameters of the Weibull distribution.

Under the action of external temperature and pressure, the damage of NGH sediments grows gradually with the generation of microdefects within them. Assuming that the damage of NGH sediments is a continuous process and that the damage variable D is a macrodescription of the cumulative effect of the microdefects, the damage evolution equation can be obtained by integrating the probability density function over the interval [0, F]:

D = ∫_0^F P(x) dx = 1 − exp[−(F/F_0)^m]    (14)

Physically, the damage variable D is the fraction of damaged microelements among the total number of elements (N_t), which varies between 0 and 1. D = 0 represents no damage (the initial state), D = 1 represents the complete damage state, and 0 < D < 1 when partial damage occurs. By substituting Equation (14) into Equation (12), we obtain:

Δσ = E_sc ε_1 exp[−(F/F_0)^m] − (1 − 2ν_sc) σ_3    (15)

This is a general form of the constitutive model of NGH sediments considering damage. When there is no damage at a relatively low load (D = 0), it reduces to the classic linear elastic constitutive model of Equation (10). When damage occurs (0 < D ≤ 1), additional formulas are required to determine F, F_0, and m, which will be deduced in the following section based on the Drucker-Prager criterion.

| Parameters of constitutive model for NGH sediments

Drucker and Prager 67 proposed the Drucker-Prager criterion, which reflects the influence of the average principal stress (or the first stress tensor invariant I_1) on the shear strength of geotechnical materials; it is an extended form of the Mohr-Coulomb failure criterion and has been widely used. [68][69][70][71] Here, for the microelements of NGH sediments with the Weibull-distributed strength F, the Drucker-Prager failure criterion can be expressed as follows:

F = α_0 I_1 + √J_2    (16)

where I_1 is the first stress tensor invariant and J_2 is the second deviatoric stress invariant, both evaluated with the effective stresses; α_0 is the material parameter associating the Drucker-Prager and Mohr-Coulomb criteria in the principal stress space (a function of the internal friction angle φ); σ_1*, σ_2*, and σ_3* are the effective stresses; and σ_1, σ_2, and σ_3 are the nominal stresses. According to the equivalent strain hypothesis (Equation (11)), we have:

σ_i* = σ_i/(1 − D), i = 1, 2, 3    (17)

According to the generalized Hooke's law, the axial strain can be expressed as follows:

ε_1 = [σ_1* − ν_sc(σ_2* + σ_3*)]/E_sc    (18)

So Equation (17) can be rewritten as follows:

σ_i* = σ_i E_sc ε_1/[σ_1 − ν_sc(σ_2 + σ_3)]    (19)

In the conventional triaxial compression test the confining pressures satisfy σ_2 = σ_3; substituting Equation (19) into Equation (16), F can be expressed as follows:

F = E_sc ε_1 [α_0(σ_1 + 2σ_3) + (σ_1 − σ_3)/√3]/(σ_1 − 2ν_sc σ_3)    (20)

It is a general observation in conventional triaxial compression tests of NGH sediments that the trend of the measured stress-strain curve changes at the peak point, which is mathematically a stationary point of the stress-strain relation.
Therefore, in Equation (15), the peak deviation stress (Δσ_max) and the corresponding peak axial strain (ε_max) under a specific confining pressure and hydrate saturation should meet:

∂(Δσ)/∂ε_1 = 0 at ε_1 = ε_max,  and Δσ(ε_max) = Δσ_max    (21)

Based on Equations (15), (20), and (21), the parameters m and F_0 can be expressed as follows:

m = 1/ln[E_sc ε_max/(Δσ_max + (1 − 2ν_sc) σ_3)]    (22)

and

F_0 = F_max m^(1/m)    (23)

where F_max is the microelement strength of the hydrate sediment corresponding to the peak of the deviatoric stress.

Equations (6), (15), (20), (22), and (23) together give the constitutive model of NGH sediments considering the damage effect. We name this constitutive model SC-SDCM in the following sections for convenience. The model contains five parameters: the Young's modulus E_sc, the Poisson's ratio ν_sc, the microelement strength F of the NGH sediments, and the Weibull distribution parameters F_0 and m. The elastic parameters (E_sc and ν_sc) of the NGH sediments, linked to the hydrate content, are calculated by the self-consistent method; the Weibull distribution parameters F_0 and m are determined from the peak points of the triaxial test curves; and F is calculated from the Weibull distribution function.

| MODEL EVALUATION

To evaluate the model performance, we compare the predictions of our SC-SDCM with published experimental data and with other models, respectively. The main procedures are as follows; these procedures and details are also shown in Figure 3. (i) Identify the matrix properties (E_1, ν_1) of the NGH sediments according to the measurements on cores at different confining pressures when the hydrate saturation is zero. (ii) Estimate the effective elastic moduli (E_sc, ν_sc) of NGH sediments with different saturations (S_h) by the SC scheme according to Equation (6). (iii) Based on the results of (ii) and the peak point (ε_max, Δσ_max) of the measured stress-strain curves, calculate the parameters F_0 and m of the SC-SDCM according to Equations (22) and (23). (iv) Based on the results of (ii) and (iii), calculate the corresponding deviation stresses at different axial strains under different hydrate saturations (S_h) and confining pressures (P_c) according to the SC-SDCM, and then compare the results of SC-SDCM with the experimental data and other models.

| Comparison with experimental data

For the comparison with experimental data, we choose representative measurements on artificial cores by Masui et al 18 (Figures 4 and 5). To quantify the reliability of the SC-SDCM, we calculated the relative error of the predictions with respect to the experimental measurements. The statistical distributions of the relative errors are shown in Table B1 (see Appendix B). For all the data in Figures 4 and 5, the mean values of the relative errors are 7.3% and 6.8% (Table B1), respectively. The predictions of SC-SDCM with a relative error smaller than 5% account for about 67.8% of all the measurements.

To obtain cores with different hydrate saturations, methane gas was injected into the skeletons at different confining pressures, with the temperature below −5.15°C, for 24 hours. After that, triaxial tests were carried out at a pore pressure of 8 MPa and confining pressures of 8.5 MPa, 9 MPa, 10 MPa, and 11 MPa, respectively, with the axial load applied at a strain rate of 0.1% per minute. According to the effective stress principle of porous media, the effective confining pressures for these measurements are correspondingly 0.5 MPa, 1 MPa, 2 MPa, and 3 MPa. The effective elastic moduli and Weibull distribution parameters for the artificial NGH cores with different hydrate saturations by Miyazaki et al 26 are shown in Table 3.
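A minimal numerical sketch of this calibration, using the forms of Equations (14), (15), (20), (22) and (23) as written above, is given below. All inputs are made-up illustrative numbers, alpha_0 is an assumed value, and F is taken proportional to the axial strain at a fixed stress state, a simplification commonly adopted in this family of models.

import numpy as np

# Minimal sketch of the SC-SDCM stress-strain curve with peak-point
# calibration of m and F0. All inputs are made-up illustrative values,
# not data from the cited experiments.

E_sc, nu_sc = 800.0, 0.15        # effective moduli from the SC scheme (MPa, -)
sigma3 = 1.0                     # effective confining pressure (MPa)
eps_max, dsig_max = 0.05, 9.0    # measured peak strain (-) and peak deviator (MPa)

# Equation (22): shape parameter from the peak point.
m = 1.0 / np.log(E_sc * eps_max / (dsig_max + (1 - 2 * nu_sc) * sigma3))

# Microelement strength at the peak via Equation (20), with
# sigma1 = dsig_max + sigma3, then Equation (23) for the scale parameter.
alpha0 = 0.2                     # Drucker-Prager material parameter (assumed)
sigma1 = dsig_max + sigma3
F_max = E_sc * eps_max * (alpha0 * (sigma1 + 2 * sigma3)
                          + (sigma1 - sigma3) / np.sqrt(3)) / (sigma1 - 2 * nu_sc * sigma3)
F0 = F_max * m ** (1.0 / m)

# Equation (15): deviator stress vs. axial strain, taking F proportional
# to the strain (simplifying assumption for the illustration).
eps = np.linspace(1e-4, 0.12, 200)
F = F_max * eps / eps_max
D = 1.0 - np.exp(-(F / F0) ** m)           # Equation (14): damage evolution
dsig = E_sc * eps * (1.0 - D) - (1 - 2 * nu_sc) * sigma3
print(m, F0, dsig.max())                   # the peak should be close to dsig_max

By construction, the computed curve is linear elastic at small strains, reaches its peak at (eps_max, dsig_max), and softens beyond it, which is the qualitative behavior the SC-SDCM is designed to capture.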
The comparison results between the predictions of SC-SDCM and the experimental observations of Miyazaki et al 26 using Toyoura sand, No.7 silica sand, and No.8 silica sand are shown in Figures 6-8, respectively. In these figures, the measurements by Miyazaki et al 26 are shown with mark points, while the predictions of SC-SDCM are shown by solid curves. The results indicate that the SC-SDCM works well regardless of the sand type of the core skeleton. All the stress-strain curves (Δσ-ε_1) at different confining pressures and hydrate saturations are properly predicted by SC-SDCM with robust reliability. The relative errors were also calculated (see Table B1 in Appendix B). The performances of the SC-SDCM for the Toyoura sand cores (Figure 6) and the No.7 sand cores (Figure 7) are better than that for the No.8 sand cores (Figure 8). For NGH cores with a skeleton of No.8 sand, the larger deviations between the predictions of SC-SDCM and the experimental measurements occur mainly at relatively low hydrate saturations (eg, S_h = 0% and 27%-28%) when the effective confining pressure is 3 MPa (Figure 8A,B). The average of the relative errors for all the data is about 10% (see details in Table B1 in Appendix B).

Table 2. Effective elastic moduli and Weibull distribution parameters for the artificial specimens in Masui et al (2005). 18

| Comparison with Yoneda et al (2019)

The comparison with the measurements of Yoneda et al 30 is shown in Figure 9, where the measurements by Yoneda et al are shown with mark points and the predictions of SC-SDCM by solid curves. Again, the SC-SDCM gives satisfactory predictions for these naturally occurring samples, just as it does for the artificial cores. The mean value of the relative errors is only 8.1% (see details in Table B1 in Appendix B), which is a quite good result considering the complexity of naturally occurring materials.

| Comparison with other published constitutive models

Besides the verifications of the SC-SDCM by the experimental data above, we also compared its performance with other published models. We made two sets of comparisons, based on the experimental observations of Masui et al 18 and Miyazaki et al, 26 respectively. That is: (i) using the experimental measurements of Masui et al 18 at different hydrate saturations when the confining pressure is 1 MPa, the predictions of our SC-SDCM are compared with those of other constitutive models, including Uchida et al 37 and Yan and Wei; 34 and (ii) using the measurements of Miyazaki et al, 26 the predictions of SC-SDCM are compared with the models of Miyazaki et al, 26 Pinkert and Grozic, 42 and Uchida et al. 40 The results are shown in Figures 10 and 11. These comparisons indicate that the SC-SDCM uses relatively fewer model parameters with clear physical meanings and gives similar or even better predictions than the other models. As can be seen in Figures 10 and 11, all these constitutive models can basically capture the variation trends of the stress-strain curves (Δσ-ε_1) at different hydrate saturations. Compared with the other models, the predictions of SC-SDCM are the closest to the experimental observations.

| CONCLUSION

In this paper, a new constitutive model, SC-SDCM, is developed for natural gas hydrate (NGH) sediments. The gas hydrate sediment is considered as a composite composed of a matrix phase and an inclusion phase. The matrix phase is a conceptually equivalent elastic "solid" made of solid mineral grains, pore fluid, and gas, with volume fractions similar to the real state of NGH sediments, while the inclusion phase is the hydrate crystal.
Within this framework, the effective elastic parameters of the hydrate sediment are estimated by the self-consistent method according to the equivalent inclusion theory of micromechanics. The mesoscopic element strength of the gas hydrate sediment is described by the Weibull statistical distribution and the damage theory of composite materials. Combined with the Drucker-Prager failure criterion at the microelement level, the damage constitutive model of NGH sediments (SC-SDCM) was then established. The SC-SDCM considers the influence of mineral composition, hydrate content, and effective pressure on the mechanical properties of NGH sediments, but it does not yet include the effects of other factors, such as pore structure, hydrate type, and hydrate distribution. The new SC-SDCM is shown to be reliable and robust within a wide range of hydrate saturations and stress conditions. The predictions of SC-SDCM are consistent with the triaxial experimental observations on artificial cores by Masui et al. 18

The SC-SDCM, established using the concepts and methods of composite materials, has clear physical meaning and requires fewer model parameters, which are easy to obtain from experimental data or geophysical logging data. This new model is therefore convenient for practical engineering application. Meanwhile, based on the Eshelby equivalent inclusion theory, the relationship between the meso-composition of natural gas hydrate sediments and their macroscopic mechanical properties is established by the SC-SDCM model. With further research on the meso-composition and structure of hydrate sediments, more factors affecting their mechanical properties will be better understood, such as pore structure, hydrate type, and hydrate distribution. Their effects on the mechanical properties can then be incorporated in the estimation of the effective moduli of hydrate sediments, and the SC-SDCM model can be optimized and extended to describe the mechanical properties of hydrate sediments better.

Figure 11. Comparison between SC-SDCM and three published models (Miyazaki et al, 26 Pinkert and Grozic (2014), 42 and Uchida et al (2016) 40 ) based on the measurements by Miyazaki et al 26 using artificial NGH cores. The experimental data were measured at hydrate saturations of 0%, 27%-34%, and 41%-45% when the confining pressure was 1 MPa, and are indicated by circle points.

APPENDIX A

The values of P and Q for ellipsoidal inclusions with arbitrary aspect ratio (α) are expressed through the tensor T_ijkl, which relates the uniform far-field strain to the strain in the ellipsoidal inclusions (Wu, 1966). 63 Berryman (1980) gave the calculation of the related scalars required for P and Q, with A, B, and R expressed in terms of K_i and μ_i, the bulk modulus and shear modulus of the ith phase inclusion, and ν_m, the Poisson's ratio of the matrix, together with the functions β and f.
6,764.8
2021-09-14T00:00:00.000
[ "Environmental Science", "Engineering" ]
Effect of a Motion Artifact Correction System on Cone-Beam Computed Tomography Image Characteristics Objectives: Determine the effect of the motion correction system on cone-beam computed tomography (CBCT) image quality parameters, artifacts, and contrast-to-noise ratio (CNR) using different motion settings. Materials and methods: A customized phantom insert array was prepared using SEDENTEX CT IQ Phantom (Leeds Test Objects, Yorkshire, England) stabilized over a rotating electric turntable. Thirty baseline CBCT scans were acquired with standardized technique factors on the ProMax 3D (Planmeca, Helsinki, Finland) machine using combinations of different motion settings, including no motion, three- and six-degree motion, and with and without the use of a motion correction system. The standardized images were exported to ImageJ software. Image quality parameters, artifacts, and CNR values were evaluated and compared among the different acquisition settings. Results: The use of the motion correction system algorithm compared with the different motion settings showed a statistically significant difference for all the parameters (p<0.05) except for artifact values for six-degree motion (p<0.07). The effect of different motion settings on the parameters was not statistically significant. Conclusion: The use of a motion correction system, a proprietary algorithm-based system incorporated in the ProMax 3D CBCT unit, deteriorates the image quality characteristics evaluated in this in vitro study, namely artifact value and CNR. Its use in clinical settings might be limited to situations where patient motion is expected and appropriate head stabilization is not possible due to age or disease. Introduction Over the last decade, the use of cone-beam computed tomography (CBCT) for various diagnostic tasks in oral and maxillofacial imaging across the world has exponentially increased. It has become a standard for imaging for dental implant assessment, endodontic evaluation, various jaw lesions, etc. From a technical point of view, to produce radiographic images, a divergent pyramidal or cone-shaped source of ionizing radiation is directed through the area of interest onto an area on the X-ray detector that is present on the opposite side. The X-ray source and detector rotate around a rotation fulcrum fixed within the center of the region of interest. During the rotation, multiple (from 150 to more than 600) sequential planar projection images of the field of view (FOV) are acquired in a complete, or sometimes partial, arc. Once the basis projection frames have been acquired, data must be processed to create the volumetric data set. This process is called reconstruction. Reconstruction is a multistep process that is computationally complex. The most common reconstruction method is the Feldkamp algorithm, which is a modified filtered back projection method [1]. Despite the advantages that CBCT imaging provides, there are several limitations. Limited soft-tissue contrast resolution and the presence of multiple artifacts in the final reconstructed images are significant limitations. In CBCT imaging, an artifact may be defined as a "visualized structure in the reconstructed data that is not present in the object under investigation." Generally, artifacts are introduced by discrepancies between the actual physical conditions of the CBCT scanner's technical composition and the composition, position, and behavior of the object under consideration. 
Another reason is the simplified mathematical assumptions used for 3D image reconstruction [2]. The correction of patient motion artifacts, in particular, remains relatively unexplored. The general problem is quite easy to explain in terms of patient motion artifacts. Suppose an object moves during the scanning process. The reconstruction does not account for that movement since no information on the motion is integrated into the reconstruction process. When the object moves, the back-projection lines do not correspond to the lines where the attenuation is recorded. The back-projection assumes a completely stationary geometry. Consequently, the intensities contained in the projections are back-projected under the static assumption. A correction compensating for the actual movement in each of the projections would be required [2]. The relatively long acquisition times for some CBCT scans, ranging from 5 to 40 seconds, make it difficult for patients, particularly children, to keep still during the examination. Further, other reasons for patient movements, such as systemic diseases in elderly patients (e.g., Parkinson's disease), must be considered. If the artifacts are severe, leading to a significant loss in image quality, the CBCT image sections may not be interpretable, and scans may need to be acquired again, thereby increasing the total patient dose [3]. Minimal patient movement can cause motion artifacts, potentially degrading image quality. Recent in vivo studies showed that movements ≥0.5 mm take place in nearly 80% of CBCT examinations [4], and suggested that the presence of patient movement ≥3 mm had a significant impact on CBCT image quality and interpretability [5]. To overcome the drawbacks of patient motion, the literature has suggested that CBCT reconstruction algorithms should consider patient motion, i.e., provide automated correction of motion artifacts [3]. To the best of our knowledge, only two CBCT scanners, ProMax 3D (Planmeca, Helsinki, Finland) and X1 (3Shape, Denmark), have incorporated motion artifact correction systems to date. The X1 correction system requires a head tracking system, without which motion artifact correction cannot be performed. However, the ProMax 3D uses a correction algorithm that does not require a head-tracking device and can be turned on and off before scan acquisition [6]. The motion artifact correction system in ProMax 3D is called CALM™ (Correction Algorithm for Latent Movement). In this in vitro study, we evaluated the effect of a motion correction system on CBCT image quality parameters, namely artifacts and contrast-to-noise ratio (CNR), using different motion settings. Materials And Methods Thirty baseline CBCT scans were acquired using the ProMax 3D (Planmeca, Helsinki, Finland) unit with standardized acquisition parameters: 10 cm × 10 cm field of view (FOV), 90 kVp, 12 mA, and 15 seconds exposure time. The voxel size selected was 150 µm. The SEDENTEXCT IQ Phantom, developed by Leeds Test Objects Ltd. (Boroughbridge, North Yorkshire, England), was used for image quality testing [7]. A customized phantom array with three inserts was prepared: the artifact insert, the aluminum contrast-resolution insert, and the delrin contrast-resolution insert. The inserts were positioned such that they would be visible and centered in the images. A rotating electric turntable was used to induce motion; the device allowed for remote-controlled rotation at various degrees, as low as 1-degree to-and-fro motion. The phantom array was placed, centered, and stabilized over the rotating turntable for CBCT image acquisition (Figure 1a-1b).
Six different scan combinations were acquired by varying the use of the CALM™ algorithm and the degree of motion. The combinations are given in Table 1.
TABLE 1: Six scan combinations with variations in motion and use of the motion correction system
1. Without motion, without motion correction system
2. Without motion, with motion correction system
3. Three-degree motion, without motion correction system
4. Three-degree motion, with motion correction system
5. Six-degree motion, without motion correction system
6. Six-degree motion, with motion correction system
The motion was induced after scan acquisition had begun, as soon as the gantry rotation was visually noted. For each combination, five baseline scans at the same acquisition setting were acquired to standardize the analysis. Scout images were acquired to verify the position of the inserts in the scan before the final acquisition. All thirty scans were acquired on the same day to prevent any change in the rotating turntable or phantom array position. As per the SEDENTEXCT IQ phantom user manual, native axial images were exported using the thinnest slice thickness and smallest slice interval available [7]. The images were exported into ImageJ software (National Institute of Health, Bethesda, MD, USA) for analysis of artifacts, contrast resolution, and CNR. The analyses were as follows. For artifact (via beam hardening artifact) We selected the 100th slice from the scan's inferior aspect, which provided optimum visualization of the phantom inserts. The insert consists of a line of three 5.0 mm diameter rods of titanium suspended in a polymethyl methacrylate (PMMA) base [7]. A 25-mm line connecting the peripheral edges of the insert, parallel to and 2 mm away from the three rods, was drawn and exported to ImageJ for plot profile analysis. A macro was created and used to maintain reproducibility. The peaks and valleys on the resulting gray value plot were examined, and the maximum and minimum values were noted. The difference between these maximum and minimum values was defined as the artifact (Figure 2). FIGURE 2: Axial CBCT image showing artifact insert and plot profile generated through ImageJ software indicating gray values for analysis For contrast-to-noise ratio We selected the 100th slice from the scan's inferior aspect, which provided optimum visualization of the phantom inserts. Two types of contrast-resolution inserts were used: aluminum and delrin. Each insert consists of 5.0, 4.0, 3.0, 2.0, and 1.0 mm-diameter rods suspended in a PMMA base [7]. For CNR evaluation, the two contrast-resolution inserts and a control area in the PMMA base were selected. A 4 mm × 4 mm square area centered in the 5.0 mm diameter rods was selected in the aluminum contrast-resolution insert, and a histogram was created using ImageJ. Similar to the artifact evaluation, a macro was created for reproducibility. The mean and standard deviation values were recorded from the histogram. Subsequent mean and standard deviation values were recorded for the delrin insert and control area (Figure 3). FIGURE 3: Area analysis for contrast resolution and CNR for aluminum, delrin and control area with representative histograms with mean and standard deviation values The CNR was then calculated for both the aluminum and delrin inserts using the formula given in [8]. The six scan combinations, with five baseline scans per combination, were evaluated for artifact and CNR values, and the mean values were used for each scan combination.
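To make the two measurements above concrete, the following is a minimal Python sketch of the same computations on exported gray values. The function names and array shapes are illustrative, and the CNR expression shown (insert-versus-control mean difference over control-region noise) is one common definition assumed here; the exact formula from reference [8] is not reproduced in the text above.

```python
# Minimal sketch (not the authors' ImageJ macro) of the artifact and CNR
# quantification described above, assuming images are loaded as NumPy arrays.
import numpy as np

def artifact_value(profile: np.ndarray) -> float:
    """Difference between the maximum and minimum gray values
    along the 25-mm line profile drawn beside the titanium rods."""
    return float(profile.max() - profile.min())

def cnr(insert_roi: np.ndarray, control_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio from a 4 mm x 4 mm insert ROI and a PMMA
    control ROI. Assumed form: |mean difference| / control-region SD."""
    return abs(insert_roi.mean() - control_roi.mean()) / control_roi.std(ddof=1)

# Example with synthetic gray values:
rng = np.random.default_rng(0)
profile = rng.normal(100, 5, 160)          # stand-in for the ImageJ plot profile
aluminum = rng.normal(160, 8, (27, 27))    # ~4 mm x 4 mm at 150 um voxels
control = rng.normal(100, 8, (27, 27))     # PMMA base region
print(artifact_value(profile), cnr(aluminum, control))
```

In practice the per-scan values computed this way would then be averaged over the five baseline scans of each combination, as described above.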
Statistical analysis The statistical analysis was done using SPSS software version 23.0. A Student's t-test was used to evaluate the effect of the motion correction system on each motion setting separately. A one-way ANOVA was used to evaluate the three motion settings with and without the use of the motion correction system. All six setting combinations were compared with each other using the post-hoc Bonferroni test. For all statistical tests, a p-value of <0.05 was considered statistically significant. Results For each combination of motion and motion correction system settings, the mean values of the five baseline scans for the artifact, CNR Aluminum, and CNR Delrin were calculated, as shown in Table 2. Effect of the use of the motion correction system for each motion setting (using a Student's t-test) The use of the motion correction system for each motion setting separately was statistically significant (p<0.05) for the different values of artifact, CNR Aluminum, and CNR Delrin, except for the six-degree motion setting, where the artifact values were not statistically significant with or without the use of the motion correction system for scan acquisition (p>0.05) (Table 3). Comparison of effect of each motion setting with and without the use of motion correction system (using one-way ANOVA) The values of artifact, CNR Aluminum, and CNR Delrin are not statistically significant for the different motion settings with and without the use of a motion correction system (p-values > 0.05). Comparison of all six scan acquisition settings with each other (using post-hoc Bonferroni test) The comparison of all scan acquisition settings with each other further emphasizes our results. There is a statistically significant difference in the use of the motion correction system compared with the different motion settings, except for the artifact values for the six-degree motion setting with and without using the motion correction system. Discussion Although limited, several methods have been introduced for the detection, evaluation, and subsequent correction of patient motion in CBCT scans. Schulze et al. proposed automated patient movement detection software based on optical flow theory, which is defined as the distribution of apparent velocities of movement of brightness patterns in an image. They concluded that optical flow theory is an efficient concept for the automated detection of patient motion on the projection images acquired during a CBCT scan [9]. Niebler et al. presented an iterative motion correction algorithm that allowed the patient's motion to be detected and taken into account during reconstruction. They observed improvements in image quality compared to uncorrected reconstructions, but the proposed algorithm is costly and computationally expensive [10]. Two CBCT manufacturers have incorporated motion correction systems into their machines, namely the X1 and ProMax 3D. Although the exact functioning of their algorithms is not available due to proprietary restrictions, a basic understanding of how they function is available. X1 (3Shape, Denmark) requires a head tracking system for the detection of motion. The correction system is a two-step process. Step one is the tracking of the skull-robot movements using the head tracker apparatus integrated with the unit. The second step is an iterative reconstruction of the acquired projection images, incorporating the head tracker data [5].
The motion correction system CALM™ included in the ProMax 3D CBCT unit is understood to be an algorithm-based method, relying on optical flow measurements to predict movements as detected in the projection images. Given the nature of the methods used to avoid motion artifacts, one could speculate that the latter somewhat compensates for, rather than corrects, movement artifacts, as optical flow measurements are directly related to brightness variation in the images. It may be affected by the inherent voxel value variation present in CBCT data sets, leading to inaccuracies in patient movement tracking [6]. Apart from our study, the effect of using the motion correction system has been shown in two studies. Spin-Neto et al. compared the effects of motion and the motion artifact correction system on apical periodontitis diagnosis in a human cadaver. They induced three motion types (3 mm movement with nodding, lateral rotation, and tremor type). They found that motion and motion correction systems had a significant effect on apical periodontitis diagnosis on CBCT images. The area under the curve (AUC) for images with the motion correction algorithm was 0.732-0.790, which was higher than for the images without motion correction (AUC 0.541-0.70) [11]. In our study, the magnitude of the motion was slightly different. We induced motion using the electric turntable and increased the degree of motion while keeping the direction of the motion constant. In the study by Spin-Neto et al. mentioned above [11], they changed the motion direction with different movement types while keeping the same 3 mm motion. Santaella et al. compared the effects of different motion and motion artifact correction systems on image quality and interpretation using aligned and lateral-offset detectors. They used several diagnostic tasks, including implant planning and furcation assessment for interpretation, and six different types of motion, and compared the effect of these motions on the observers' interpretation abilities using different CBCT units, including ProMax 3D and X1, which had motion correction systems. They concluded that interpretation ability was improved with a motion correction system enabled for aligned detectors but was less effective with lateral-offset detectors [6]. In contrast, our study did not change the detector alignment for scan acquisition; our movements were of three types based on the degree of motion rather than direction; and we used only a single motion correction system instead of the two in their study. Several other studies have been done, showing the effect of motion artifacts on images or of different patient positioning on motion artifacts. Nardi et al. compared the effect of different head movements on image quality. They concluded that not all motion types affect the image quality equally, and that movements of short duration and gradual movements affect the image differently [12]. Keris et al. did a retrospective study comparing the effect of motion on images with different patient positioning and found no significant impact of patient positioning [13]. However, both of these studies evaluated the effect of different types of motion on images subjectively and did not consider the possibility of motion correction systems. To the best of our knowledge, this is the first study comparing the effect of the proprietary motion correction system CALM™ on the image quality characteristics, artifacts, and CNR values with respect to different motion settings in an in vitro setting using a CBCT phantom.
It can be inferred from our study that, with the motion correction system enabled, there is a detrimental effect on the artifact values and CNR for both high- and low-contrast objects. The image quality differences caused by the induction of motion at different levels were not statistically significant; however, they showed gradual image degradation. Despite promising results, our study has several limitations. Although only one direction of motion was induced, its magnitude increased up to 6 degrees to and fro. In patients, especially in children, motion can occur in several directions, including anteroposterior or lateral; older patients with systemic diseases show tremor-like movements as well. Our study was performed in a controlled setting, whereas patient motion can be affected by different characteristics, including differences in scan acquisition times, differences in the age of the patient, and differences in projection geometry, including changes in the field of view or detector alignment. One of the important characteristics is spatial resolution; although we incorporated it during the pilot project of this study, it was a visual assessment, and further studies are advocated using detailed modulation transfer function (MTF) calculations. With these limitations, further studies are advocated for evaluating these promising motion correction systems. Such studies should include spatial resolution and different types of motion and incorporate them into clinical applications. It should also be noted that the motion correction system can only be applied before scan acquisition, which limits its use in situations where excessive motion is expected to happen during scan acquisition. Our results indicate that the motion correction system should be used judiciously, as it can have a detrimental effect on the CNR and artifact values. Several factors affect patient motion during CBCT image acquisition, and the most important factors are scan acquisition time and patient head stabilization. Scan acquisition times can be reduced to a certain extent by reducing scan resolution, the number of basis images, or the rotational arc, as permitted by the diagnostic needs of each scan. The second method is head stabilization, which can be performed using headrests, chin cups, or head restrainers. If optimum patient head stabilization and appropriate adjustment of scan acquisition times are possible, the motion correction system CALM™ should be used with caution, as it may degrade some image quality parameters. Conclusions The use of the motion correction system, a proprietary algorithm-based system incorporated in the ProMax 3D CBCT unit, deteriorates the image quality characteristics evaluated in this in vitro study, namely artifact value and CNR. Its use in clinical settings might be limited to situations where patient motion is expected and appropriate head stabilization is not possible due to age or disease. However, there are other technical and clinical characteristics that need to be taken into consideration, and further studies are required. Additional Information Disclosures Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
4,555.6
2023-02-01T00:00:00.000
[ "Medicine", "Physics" ]
Optical imaging detects metabolic signatures associated with oocyte quality Abstract Oocyte developmental potential is intimately linked to metabolism. Existing approaches to measure metabolism in the cumulus oocyte complex (COC) do not provide information on the separate cumulus and oocyte compartments. Development of an assay that achieves this may lead to an accurate diagnostic for oocyte quality. Optical imaging of the autofluorescent cofactors reduced nicotinamide adenine dinucleotide (phosphate) [NAD(P)H] and flavin adenine dinucleotide (FAD) provides a spatially resolved indicator of metabolism via the optical redox ratio (FAD/[NAD(P)H + FAD]). This may provide an assessment of oocyte quality. Here, we determined whether the optical redox ratio is a robust methodology for measuring metabolism in the cumulus and oocyte compartments compared with oxygen consumption in the whole COC. We also determined whether optical imaging could detect metabolic differences associated with poor oocyte quality (etomoxir-treated). We used confocal microscopy to measure NAD(P)H and FAD, and extracellular flux to measure oxygen consumption. The optical redox ratio accurately reflected metabolism in the oocyte compartment when compared with oxygen consumption (whole COC). Etomoxir-treated COCs showed significantly lower levels of NAD(P)H and FAD compared to control. We further validated this approach using hyperspectral imaging, which is clinically compatible due to its low energy dose. This confirmed lower NAD(P)H and FAD in etomoxir-treated COCs. When comparing hyperspectral-imaged vs non-imaged COCs, subsequent preimplantation development and post-transfer viability were comparable. Collectively, these results demonstrate that label-free optical imaging of metabolic cofactors is a safe and sensitive assay for measuring metabolism and has potential to assess oocyte developmental competence. Introduction Oocyte developmental competence is defined as the capability of the oocyte to resume meiosis, undergo fertilization and preimplantation embryo development, implant, and result in a healthy offspring [1]. The developmental competency of an embryo is highly dependent on the oocyte from which it is derived. Thus, selection of an oocyte with high developmental potential is a possible route to improve the success rate of in vitro fertilization (IVF) [2]. Morphological assessment of the oocyte remains the primary method of evaluation in the clinic, despite being subjective and inaccurate in predicting competency [2][3][4][5]. Much effort has been invested in understanding the metabolism of the cumulus oocyte complex (COC) [6][7][8][9][10][11] and how various metabolic pathways impact oocyte quality (for review, see [2]). Intra-oocyte ATP is an important determinant of oocyte quality, with a deficit in ATP associated with reduced developmental potential [12][13][14]. The oocyte relies predominantly on oxidative phosphorylation to generate mitochondrial-derived ATP, and via gap junctions, cumulus cells contribute to the pool of intra-oocyte ATP via glycolysis: a metabolic pathway not active in the oocyte [2,15]. Many studies have attempted to find metabolic biomarkers associated with oocyte quality, including metabolites within follicle fluid [16], cumulus cell gene expression [1,17,18], and profiling of "spent medium" following in vitro maturation (IVM) [16,[19][20][21][22]. However, these methods have not been routinely implemented in the clinic.
As the oocyte and cumulus cells utilize different metabolic pathways to generate intra-oocyte ATP, these approaches may fail to accurately predict oocyte quality as they measure metabolism of the whole COC and do not provide metabolic signatures of the separate cumulus and oocyte compartments [2]. Thus, development of a non-invasive assay that provides spatial information on metabolic activity in the COC may lead to an accurate diagnostic for oocyte quality. Mitochondrial oxidative phosphorylation is the primary pathway for generating cellular ATP and is typically measured via oxygen consumption: the benchmark measurement in our field [23][24][25][26]. However, this measurement is unable to provide spatial information on individual cells where heterogeneity likely exists, as is the case for the COC [2]. To complement this, the recent use of label-free optical imaging of intracellular autofluorescence to non-invasively assess metabolism [27][28][29][30] is potentially very powerful, allowing for spatial information within the COC to be recorded. A large proportion of cellular autofluorescence is derived from the metabolic co-factors reduced nicotinamide adenine dinucleotide (NADH), reduced nicotinamide adenine dinucleotide phosphate (NADPH), and flavin adenine dinucleotide (FAD). Due to their near-identical spectral properties, NADH and NADPH are collectively referred to as NAD(P)H [31]. These co-factors can be used to calculate the optical redox ratio (FAD / [FAD + NAD(P)H]), which has been used in other cell types as a measure of oxidative phosphorylation [32]. In the field of reproductive biology, optical imaging of these endogenous fluorophores using laser scanning confocal microscopy has been used to measure metabolism in oocytes and preimplantation embryos [33][34][35][36]. However, previous work has not assessed the accuracy of the optical redox ratio in measuring dynamic metabolic changes in the cumulus and oocyte compartments of the COC. This is particularly important as the captured fluorescence may include signatures from fluorophores other than FAD and NAD(P)H [27]. In this study, we assessed whether the optical redox ratio is a robust method to measure dynamic metabolic changes in the oocyte and cumulus cell compartments of the COC. We did this by comparing the optical redox ratio for each of the cell types with oxygen consumption of the whole COC. These were measured at basal levels and in response to modulators of oxidative phosphorylation. Oxygen consumption was measured using extracellular flux analysis, which has been utilized in various cell types, including cancer [37][38][39] and reproductive cell types [40][41][42]. We also determined whether label-free confocal imaging could detect metabolic differences in COCs with poor developmental potential. Following the demonstration that confocal imaging was robust and able to detect metabolic variance in COCs with poor developmental potential, we validated these results using a clinically appropriate imaging modality, namely hyperspectral microscopy. This requires a low energy dose and thus has a very low potential for phototoxicity resulting from imaging. The safety of this imaging approach was examined by recording embryo development following IVF as well as postnatal outcomes following transfer of resultant embryos to recipients.
In this study, we show the capacity of label-free optical imaging to separately measure metabolism in the oocyte and cumulus cell compartments of the intact COC, and the potential of this approach to safely assess oocyte developmental competence. Materials and methods All reagents were purchased from Sigma Aldrich (St. Louis, MO, USA) unless stated otherwise. Animal ethics Female (21-23 days old) and male (6-8 weeks old) CBA × C57BL/6 first filial (F1) generation (CBAF1) as well as female (6-8 weeks old) Swiss mice were obtained from Laboratory Animal Services (LAS, University of Adelaide, SA, Australia) and maintained on a 12 h light:12 h dark cycle with rodent chow and water provided ad libitum. All experiments were approved by the University of Adelaide's Animal Ethics Committee and were conducted in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes. Media All gamete and embryo culture took place in media overlaid with paraffin oil (Merck Group, Darmstadt, Germany) in a humidified atmosphere of 5% O2, 6% CO2 with a balance of N2 at 37 °C unless stated otherwise. All media were pre-equilibrated for at least 4 h prior to use. All handling procedures were performed on microscopes fitted with warming stages calibrated to maintain the media in dishes at 37 °C. Collection of immature cumulus oocyte complexes Mice were injected subcutaneously with 5 IU equine chorionic gonadotrophin (eCG, Braeside, VIC, Australia). At 44 h post-eCG, mice were culled by cervical dislocation and ovaries collected in warmed handling medium. The COCs were isolated from ovaries by puncturing follicles using a 29-gauge × 1/2 in. insulin syringe with needle (Terumo Australia Pty Ltd., NSW, Australia). Isolated immature COCs were used for IVM or for imaging using two-channel laser scanning confocal microscopy to measure the optical redox ratio. In vitro maturation Immature COCs were cultured in groups of 20 in drops of 100 μL IVM medium overlaid with paraffin oil. Mature COCs (12 h post-IVM) were used for imaging using two-channel laser scanning confocal microscopy to measure the optical redox ratio. A separate cohort of COCs was matured in the absence or presence of etomoxir (100 μM; 16 h) [9]. Maturation of COCs occurred within 20% O2, 6% CO2 with a balance of N2 at 37 °C. Following IVM, 12 h COCs were treated with metabolic inhibitors/uncoupler and imaged using two-channel scanning confocal microscopy. COCs matured for 16 h were either imaged (two-channel scanning confocal microscopy or hyperspectral microscopy) or fertilized in vitro. In vitro fertilization and embryo culture One hour prior to IVF, male mice with proven fertility were culled by cervical dislocation, with the epididymis and vas deferens collected in Research Wash Medium. Spermatozoa were released from the vas deferens and caudal region of the epididymis by blunt dissection in 1 mL of Research Fertilization Medium and allowed to capacitate for 1 h in a humidified atmosphere of 5% O2, 6% CO2 with a balance of N2 at 37 °C. Mature COCs were then co-cultured with capacitated spermatozoa (35,000 sperm/mL) for 4 h at 37 °C in a humidified atmosphere of 5% O2, 6% CO2 with a balance of N2 [48,49]. The resulting presumptive zygotes were cultured in groups of 10-12 in 20 μL drops of Research Cleave Medium overlaid with paraffin oil in a humidified atmosphere of 5% O2, 6% CO2 with a balance of N2 at 37 °C.
Embryos were assessed for on-time morphological development on day 2 (2-cell; 24 h post-IVF) and day 5 (blastocyst-stage; 96 h post-IVF). The rate of development to the 2-cell and blastocyst stages was calculated from the initial number of oocytes. Two-cell-stage embryos were identified by the presence of two regular blastomeres of equal size, while blastocysts were identified by the presence of a blastocoel cavity ≥ two-thirds the size of the embryo, or by being expanded or hatching. Use of metabolic inhibitors Mitochondrial inhibitors (oligomycin, carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP), and rotenone/antimycin A (Rot/AA)) used for this study were from the Seahorse XF Cell Mito Stress Test Kit (Agilent Technology, CA, USA). Inhibitors were dissolved as per the manufacturer's instructions in either MEM-E (for experiments where metabolic cofactors were measured by optical imaging) or Seahorse XF DMEM medium (measurement of oxygen consumption rate (OCR) by extracellular flux assay; Agilent Technology, CA, USA) and stored at −80 °C. One hour prior to imaging or extracellular flux analysis, inhibitors were diluted to the required concentrations with prewarmed MEM-E or DMEM. The optimal dose for each mitochondrial inhibitor/uncoupler was determined. Doses chosen for analysis (Supplementary Figure S1) were based on the manufacturer's instructions and prior literature [41]. The final concentrations used for this study were 2.0 μM oligomycin, 1.0 μM FCCP, and 2.5 μM Rot/AA, as these elicited an appropriate response in oxygen consumption in COCs (Supplementary Figure S1). Measurement of oxygen consumption rate in immature cumulus oocyte complexes using extracellular flux analysis The base medium used for extracellular flux analysis was Seahorse XF DMEM medium, supplemented with 1 mM pyruvate (Agilent Technology, CA, USA), 2 mM glutamine (Agilent Technology, CA, USA) and 10 mM glucose (Agilent Technology, CA, USA). Immature COCs were isolated in prewarmed handling medium as described above and placed into wells at a density of 20 COCs/well. One hour prior to the assay, COCs were washed three times in Seahorse XF DMEM before the medium was replaced with fresh Seahorse XF DMEM, and the COCs were cultured for another hour in a non-CO2-gassed, humidified incubator at 37 °C. The Seahorse Bioscience XF analyzer and Mito Stress Test Kit (Agilent Technology, CA, USA) were used according to the manufacturer's instructions. The sensor-containing FluxPak (Agilent Technology, CA, USA) was hydrated and incubated overnight at 37 °C in a non-CO2-gassed humidified incubator. The sensor-containing FluxPak was calibrated for approximately 15 min as per the manufacturer's guidelines. Upon completion, the pre-warmed cell plate containing immature COCs was loaded into the machine. The OCR was analyzed using a protocol involving a 12 min equilibration period and alternating between a 3 min measurement period and a 3 min re-equilibration period. During the measurement period, the sensor probe was lowered, creating an airtight 2.3 μL microenvironment. The output of extracellular flux analysis was given as OCR in pmol/min/well. In a separate cohort, the basal OCR was measured in immature COCs cultured in either Seahorse XF DMEM or MEM-E to ensure that the base medium did not affect oxygen consumption (Supplementary Figure S2). This was performed because the goal of the study was to use optical imaging of autofluorescent metabolic co-factors to assess oocyte developmental competence, which requires COCs to be cultured in standard IVM medium (MEM-E).
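As a concrete illustration of how the exported measurements can be summarized, below is a minimal Python sketch that averages the repeated 3-min measurement cycles per condition to give one OCR value per phase. The phase labels, numbers, and data layout are illustrative assumptions, not the Wave export format.

```python
# Minimal sketch: summarizing OCR (pmol/min/well) per experimental phase,
# assuming the data are available as simple (phase, OCR) pairs.
from collections import defaultdict

measurements = [
    ("basal", 210.0), ("basal", 205.5), ("basal", 208.2),
    ("oligomycin", 122.4), ("oligomycin", 118.9),
    ("FCCP", 340.1), ("FCCP", 332.7),
    ("Rot/AA", 55.3), ("Rot/AA", 52.8),
]

ocr_by_phase = defaultdict(list)
for phase, ocr in measurements:
    ocr_by_phase[phase].append(ocr)

for phase, values in ocr_by_phase.items():
    print(f"{phase}: mean OCR = {sum(values) / len(values):.1f} pmol/min/well")
```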
Measurement of metabolic co-factors NAD(P)H and FAD using two-channel laser confocal microscopy Immature or mature (12 h) COCs were imaged. Images were acquired at baseline (basal), followed by sequential exposure to oligomycin, FCCP and Rot/AA (15 min between each treatment). Following IVM in the absence or presence of etomoxir (16 h), mature COCs were imaged. Imaging of COCs occurred within a 35 mm glass-bottom dish (Ibidi, Martinsried, Planegg, Germany) in 2 μL of Research Wash Medium, overlaid with paraffin oil. The autofluorescence intensity indicative of co-enzyme NAD(P)H and FAD content was recorded on an Olympus Fluoview FV10i confocal microscope (Olympus, Tokyo, Japan). Cells were excited at a wavelength of 405 nm (emission detection bandwidth: 420-450 nm) for NAD(P)H (referred to as the NAD(P)H channel), and at a wavelength of 473 nm (emission detection bandwidth: 490-590 nm) for FAD (referred to as the FAD channel), as described in ref. [35]. Image acquisition occurred at 60× magnification (NA = 1.4), with a single z-plane chosen for each COC where the oocyte diameter was largest. Imaging parameters were kept constant between replicate experiments. Fluorescence intensity was measured using ImageJ software (National Institute of Health). For oocyte intensity measurements, a region of interest was created that encompassed the entire oocyte for each COC. Quantification of intensity within the cumulus cell compartment was performed by creating two regions of interest of equal size and placing these on opposing sides of, and adjacent to, the oocyte. A mean of these two regions of interest was calculated for each COC. The optical redox ratio was calculated as the intensity of the FAD channel divided by the sum of the intensities of the NAD(P)H and FAD channels (FAD / (NAD(P)H + FAD)), which reflects the activity of the mitochondrial electron transport chain and therefore indicates the dynamic changes of cellular metabolism [30]. Hyperspectral autofluorescence and brightfield imaging In this work, the hyperspectral microscopy system (Quantitative Pty Ltd, Mount Victoria, NSW, Australia) was built by adapting a standard epifluorescence microscope (Nikon Eclipse TiE, 40× objective, NA = 1.3), fitted with a multi-LED light source (Prizmatix Ltd, Givat-Shmuel, Israel). These low-power LEDs provided 56 spectral channels: 21 excitation wavelength ranges and 3 emission wavelength filters, covering excitation wavelengths from 348 to 649 nm and emission wavelengths from 450 to 715 nm (for details of spectral channels, see Supplementary Table S1). The fluorescence of native endogenous fluorophores found within cells was captured by a 40× objective imaging onto a digital camera (ORCA-Flash 4.0, C11440; Hamamatsu, Shizuoka, Japan) using all 56 spectral channels. Image acquisition times of up to 3 s per channel were used, with multiple averaging (typically 1-3 times) to optimize image quality and minimize any potential photo-damage to the cells in each channel. Mature COCs were imaged on a microscope slide and mounted using a 0.12 mm Secure Seal spacer (Molecular Probes, Invitrogen). Blastocysts were imaged on glass-bottom confocal dishes (Ibidi, Martinsried, Planegg, Germany) containing 2 μL of Research Wash Medium overlaid with paraffin oil. Hyperspectral microscopy images were taken by adjusting the input light beam to specifically focus on the equatorial plane of the oocyte for COCs (i.e., the widest diameter) or the inner cell mass for individual blastocysts.
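The ROI quantification and redox ratio calculation described above can be sketched in a few lines of Python. The helper names, image sizes, and ROI geometry below are hypothetical (the authors used ImageJ for these measurements); the sketch only illustrates the arithmetic: per-channel mean intensities inside each ROI, and the ratio FAD / (NAD(P)H + FAD).

```python
# Minimal sketch of the ROI quantification described above, assuming the two
# channels are loaded as NumPy arrays and ROIs are boolean masks.
import numpy as np

def mean_intensity(img: np.ndarray, mask: np.ndarray) -> float:
    """Mean gray value inside a boolean region-of-interest mask."""
    return float(img[mask].mean())

def optical_redox_ratio(nadph: float, fad: float) -> float:
    """Optical redox ratio FAD / (NAD(P)H + FAD) from mean channel intensities."""
    return fad / (nadph + fad)

# Synthetic example: an oocyte ROI and two equal cumulus ROIs on opposing sides.
rng = np.random.default_rng(1)
nadph_img = rng.normal(120, 10, (256, 256))
fad_img = rng.normal(80, 10, (256, 256))
yy, xx = np.mgrid[:256, :256]
oocyte = (yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2
cumulus1 = (yy - 128) ** 2 + (xx - 50) ** 2 < 20 ** 2
cumulus2 = (yy - 128) ** 2 + (xx - 206) ** 2 < 20 ** 2

oocyte_ratio = optical_redox_ratio(mean_intensity(nadph_img, oocyte),
                                   mean_intensity(fad_img, oocyte))
# For cumulus cells, the per-channel intensities of the two ROIs are averaged
# first, mirroring the per-COC mean of the two regions described above.
cum_nadph = np.mean([mean_intensity(nadph_img, m) for m in (cumulus1, cumulus2)])
cum_fad = np.mean([mean_intensity(fad_img, m) for m in (cumulus1, cumulus2)])
print(oocyte_ratio, optical_redox_ratio(cum_nadph, cum_fad))
```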
Analysis of hyperspectral microscopy data To avoid errors in quantification, images were first digitally processed using the custom-made GUI_Preprocess v3.12 software [50]. This is required to remove image artifacts and noise, such as background fluorescence, Poisson noise, dead or saturated pixels, and illumination curvature across the field of view, as described in detail in previous work [51][52][53]. At the beginning of each experiment, two calibration images were captured using the hyperspectral system: a "background" reference image of a culture dish with medium only, and another with calibration fluid only. The "background" reference image was subtracted from all images to remove any background signals. The microscope system was calibrated with a mixture of 50 μM NADH and 10 μM riboflavin, whose spectra span all spectral channels and whose concentrations were sufficient for visualization of their autofluorescence. The excitation and emission spectra of this calibration fluid were measured using a spectrometer (FLS1000 Photoluminescence Spectrometer) and imaged on the hyperspectral microscope across all spectral channels. Based on the known spectral properties of NAD(P)H and FAD [32,54], we selected channels 1 and 2 (NAD(P)H channels) and channels 24 and 25 (FAD channels) (Supplementary Table S1) to capture their autofluorescence. The COCs and blastocyst-stage embryos were manually segmented using the brightfield image to create a region of interest [49]. For COCs, the region of interest was as described above for confocal microscopy. For blastocyst-stage embryos, the region of interest was manually drawn around the inner cell mass. The intensity of NAD(P)H and FAD was quantified using the custom-made GUI_Preprocess v3.12 software [50]. Embryo transfer and postnatal outcomes Mature COCs (16 h) were divided into two groups: those imaged using the hyperspectral microscope and those that were not. Both groups of COCs were then fertilized in vitro and allowed to develop to the blastocyst stage. Embryos were assessed for on-time morphological development on day 2 (2-cell; 24 h post-IVF) and day 5 (blastocyst-stage; 96 h post-IVF). Resultant blastocyst-stage embryos were then vitrified. Embryo vitrification and warming were performed as previously described [49]. On the day of transfer, blastocyst-stage embryos were warmed 2 h prior to embryo transfer to allow sufficient time for recovery. Blastocyst-stage embryos were transferred into the uterine horns of pseudopregnant Swiss mice 2.5 days post-coitum. Embryo transfers were performed on mice under anesthesia with 1.5% isoflurane as previously described [49]. Eight to twelve morphologically normal, expanded blastocysts were transferred to each uterine horn. Mice were monitored daily. The number of pups for each recipient was recorded on delivery. Pregnancy rate was calculated from the number of pregnant recipients over the total number of pseudopregnant females. Live birth rate was calculated from the number of live pups over the number of embryos transferred (non-pregnant mice were excluded from this analysis). At post-natal day 21, offspring were weaned, assessed for gross facial deformities, and their weight was recorded. Statistical analyses Wave software (Agilent Technology, CA, USA) was used to determine OCR in pmol/min/well.
Statistical analyses were carried out using GraphPad Prism Version 9 for Windows (GraphPad Holdings LLC, CA, USA), except for the weight of offspring data, where Statistical Package for Social Science (SPSS) version 28.0.1.0 software was used. Data were subjected to normality testing using the D'Agostino-Pearson omnibus normality test prior to statistical analysis. Normally distributed data were analyzed either by an unpaired Student's t-test or an ordinary one-way analysis of variance (ANOVA) with Holm-Šídák post-hoc test. For data that did not follow a normal distribution, a Mann-Whitney test or Kruskal-Wallis test with Dunn post-hoc test was used. Continuous data are presented as mean ± SEM. Weight of offspring at weaning is presented as estimated marginal mean ± SEM and was analyzed by a linear mixed model with litter size as a covariate. Categorical data are described as percentages and compared using the Fisher exact test. Details of statistical tests are stated in the figure legends. Statistical significance was set at a P-value <0.05. Results We evaluated the robustness of the optical redox ratio in detecting dynamic metabolic changes in the oocyte and cumulus cell compartments of the COC by direct comparison with the OCR. Assessments were made at baseline (basal; no drug treatment) and following the sequential addition of mitochondrial inhibitors/uncoupler. These drugs act by inhibiting specific components of the electron transport chain (Supplementary Figure S4) [55]. Oligomycin inhibits ATP synthase, providing an indication of the proportion of oxygen used for ATP production. Thus, the OCR will decrease upon exposure to this compound. Based on its mode of action, we hypothesized that oligomycin exposure would lead to an increase in NAD(P)H, a decrease in FAD and thus, a decrease in the optical redox ratio. FCCP is a mitochondrial uncoupler and acts by interfering with the proton gradient. This results in maximum oxygen consumption and as such, the OCR increases. We hypothesized that the addition of FCCP would lead to decreased NAD(P)H and increased FAD and optical redox ratio. Lastly, rotenone and antimycin A (Rot/AA), when added together, block complexes I and III, respectively, shutting down the electron transport chain entirely and decreasing the OCR. In this instance, we hypothesized that the addition of Rot/AA would lead to increased NAD(P)H, and decreased FAD and optical redox ratio. We tested these hypotheses in the oocyte and cumulus cell compartments of immature and mature COCs. Optical redox ratio detects dynamic changes in immature COCs that correlate with oxygen consumption rate We determined whether the optical redox ratio detects dynamic metabolic changes within immature COCs in response to mitochondrial inhibitors/uncoupler. We evaluated metabolic changes within the oocyte and cumulus cell compartments separately (Figure 1A). Compared to basal levels, there was a reduction in the intensity of NAD(P)H within the oocyte in response to oligomycin, although this did not reach statistical significance (Figure 1B). There was a significant reduction in FAD within the oocyte following the addition of oligomycin compared to basal levels (Figure 1C). The changes in NAD(P)H and FAD in response to oligomycin yielded no impact on the optical redox ratio of oocytes compared to basal levels (Figure 1D). Treatment with FCCP resulted in a significant increase in NAD(P)H, FAD, and optical redox ratio compared to levels observed in the presence of oligomycin (Figure 1B-D, respectively).
Following the addition of Rot/AA, there was no change in the levels of NAD(P)H but a significant decrease in FAD and optical redox ratio compared to levels seen in the presence of FCCP (Figure 1B-D, respectively). In cumulus cells, the addition of oligomycin caused no changes in NAD(P)H, FAD, or the optical redox ratio compared to basal levels (Figure 1E-G, respectively). The addition of FCCP caused no change in the intensity of NAD(P)H but did result in a significant increase in FAD and the optical redox ratio compared to oligomycin (Figure 1E-G, respectively). Subsequent addition of Rot/AA yielded no change in the levels of NAD(P)H compared to FCCP, although levels were significantly higher compared to basal (Figure 1E). For both FAD and the optical redox ratio, Rot/AA caused a significant decrease compared to FCCP levels (Figure 1F and G, respectively). Excitingly, when comparing the optical redox ratio with OCR, we observed that changes in optical redox ratio were consistent with OCR for the oocyte for all mitochondrial inhibitors/uncoupler (Figure 1H, black dashed line vs red line). The optical redox ratio for cumulus cells was consistent with changes in OCR with the exception of the response to oligomycin (Figure 1H; black solid line vs red line). Optical imaging of metabolic cofactors in mature COCs detects metabolic changes associated with oocyte developmental competence The above data demonstrated that label-free optical imaging of metabolic co-factors and subsequent calculation of the optical redox ratio were robust measures of metabolism in the immature oocyte. We next determined whether this approach was equivalently accurate for the mature COC (expanded, meiotically mature). In the oocyte, the addition of oligomycin yielded no impact on the intensity of NAD(P)H compared to basal levels (Figure 2A). Compared to basal levels, there was a decrease in FAD and optical redox ratio, although this was not statistically significant (Figure 2B and C, respectively). The addition of FCCP resulted in a significant decrease in NAD(P)H and increase in FAD compared to levels observed in the presence of oligomycin (Figure 2A and B, respectively). The changes in NAD(P)H and FAD in response to FCCP led to a significant increase in optical redox ratio compared to oligomycin (Figure 2C). Following addition of Rot/AA, there was no change in the intensity of NAD(P)H (Figure 2A), but there was a significant decrease in FAD and optical redox ratio compared to FCCP (Figure 2B and C, respectively). In cumulus cells, there was no impact on NAD(P)H, FAD signal, or optical redox ratio in response to oligomycin compared to basal levels (Figure 2D-F). The addition of FCCP caused no change in the level of NAD(P)H (Figure 2D) but resulted in a significant increase in FAD and the optical redox ratio compared to oligomycin (Figure 2E and F, respectively). Similar to FCCP, Rot/AA had no impact on the level of NAD(P)H (Figure 2D) but led to a significant decrease in the intensity of FAD and optical redox ratio (Figure 2E and F, respectively). Toward determining whether this or similar optical approaches could detect metabolic changes associated with oocyte developmental potential, we utilized a well-described model of poor oocyte quality [9,56]. COCs were matured in vitro in the absence or presence of etomoxir, which is known to inhibit fatty acid oxidation and result in decreased oocyte developmental potential [9].
The intensity of metabolic cofactors NAD(P)H and FAD was quantified in the oocyte and cumulus cells separately (Figure 2G-L). Compared to control, etomoxir treatment during IVM significantly reduced the intensity of NAD(P)H (Figure 2G and J) and FAD (Figure 2H and K) in both the oocyte (Figure 2G and H) and cumulus cells (Figure 2J and K). The presence of etomoxir during IVM did not alter the optical redox ratio for oocytes (Figure 2I) or cumulus cells (Figure 2L). Additionally, we confirmed that maturation in the presence of etomoxir negatively affects oocyte developmental potential. While maturation in the presence of etomoxir did not affect fertilization rate (control: 90.2 ± 2.5% vs etomoxir: 72.9 ± 8.8%, data not shown), it did result in significantly fewer embryos reaching the blastocyst stage of development (Figure 2M). Hyperspectral microscopy detects metabolic changes associated with oocyte quality that persist in resultant blastocysts The use of laser-scanning confocal microscopy as a clinical measure of oocyte quality is hampered by the high laser energy dose required for imaging, likely resulting in photodamage [29]. Consequently, we next investigated whether hyperspectral microscopy could detect analogous changes in autofluorescence in the COC. Hyperspectral microscopy was chosen due to its 100-fold lower energy dose requirement compared to laser-scanning confocal imaging [35,57]. Channels 1 and 2 were used to quantify the intensity of NAD(P)H, whereas Channels 24 and 25 were used to capture FAD (as described in Materials and Methods). Using the same model of poor oocyte quality, hyperspectral microscopy showed similar changes to NAD(P)H and FAD in both the oocyte and cumulus cells in response to etomoxir. FIGURE 3: Hyperspectral microscopy detects metabolic changes associated with oocyte quality. COCs were matured in vitro in the absence or presence of etomoxir. Etomoxir is a known inhibitor of fatty acid metabolism (β-oxidation) and negatively affects oocyte developmental competence (see Figure 2M). Matured COCs were imaged using hyperspectral microscopy. Autofluorescence intensity was quantified for the oocyte (A-D) and cumulus cells (E-H) in hyperspectral channels that match the spectral properties of NAD(P)H: Channel 1 (A and E) and Channel 2 (B and F); and FAD: Channel 24 (C and G) and Channel 25 (D and H). Data were analyzed by a two-tailed unpaired Student's t-test (A-C and H) or Mann-Whitney test (D-G). Data presented as mean ± SEM, three independent experimental replicates; n = 12 for control COCs, n = 16 for etomoxir-treated COCs. *P < 0.05, **P < 0.01. FIGURE 4: Following in vitro maturation, COCs were fertilized in vitro and developed to the blastocyst stage in the absence of etomoxir. Embryos were imaged using hyperspectral microscopy. Autofluorescence intensity within the inner cell mass was quantified for hyperspectral channels that matched the spectral properties of NAD(P)H: Channel 1 (A) and Channel 2 (B); and FAD: Channel 24 (C) and Channel 25 (D). Data were analyzed by either a two-tailed unpaired Student's t-test (C and D) or Mann-Whitney test (A and B). Data presented as mean ± SEM, three independent experimental replicates; n = 24 and n = 29 for blastocysts developed from control and etomoxir-treated COCs, respectively. **P < 0.01, ***P < 0.001.
Compared to control, there was a significant decrease in fluorescence in both the oocyte and cumulus cells following IVM in the presence of etomoxir in Channel 1 (Figure 3A and E, respectively), but no difference was seen in Channel 2 (Figure 3B and F, respectively). For both FAD channels (Channels 24 and 25), there was a significant decrease in autofluorescence in the oocyte and cumulus cells following IVM in the presence of etomoxir compared to control (oocyte: Figure 3C and D; cumulus cells: Figure 3G and H). To determine whether metabolic changes detected in etomoxir-treated oocytes persisted in resultant embryos, COCs matured in the absence or presence of etomoxir were fertilized in vitro and allowed to develop to the blastocyst stage. The intensity of NAD(P)H (Channels 1 and 2; Figure 4A and B) and FAD (Channels 24 and 25; Figure 4C and D) was significantly lower in the fetal cell lineage (inner cell mass) of blastocyst-stage embryos that developed from etomoxir-treated COCs compared to those matured in control conditions. Safety of hyperspectral imaging To assess whether photodamage had occurred in response to imaging, untreated mature COCs were either not imaged (non-imaged) or imaged using hyperspectral autofluorescence microscopy (imaged). Both groups of COCs were then fertilized in vitro and allowed to develop to the blastocyst stage. Imaging of COCs did not affect fertilization rate (non-imaged: 90.15 ± 1.71% vs imaged: 84.49 ± 2.76%, data not shown) or subsequent development to the blastocyst stage (non-imaged: 63.47 ± 5.18% vs imaged: 61.87 ± 4.78%, data not shown). We next examined whether hyperspectral imaging of the COC altered the potential of the subsequent embryo to implant and result in the birth of a healthy live offspring. Imaged COCs resulted in the birth of live pups (Figure 5A). There were no significant differences in pregnancy or live birth rate (P > 0.05; Supplementary Table S2). Imaged versus non-imaged COCs resulted in offspring with similar, and not significantly different, weights at weaning (Figure 5B; Supplementary Table S3). Similarly, there was no difference in weight at weaning according to sex (Supplementary Figure S3). No gross facial deformities were noted across treatment groups. Discussion Most metabolism assays are limited in that they assess the whole COC and fail to provide spatial information on the oocyte and cumulus cell compartments [2]. Development of a tool that non-invasively measures metabolism in both compartments may be a powerful route for assessing oocyte quality, particularly as it would provide a measurement of the oocyte compartment of the COC. Label-free optical imaging has been previously used to characterize good and poor quality oocytes that were denuded of their cumulus cells [33,36,56,58]. However, as oocytes are dependent on factors derived from cumulus cells and normally mature in the presence of these cells [2], it is important to validate the robustness of label-free optical imaging to measure metabolism in the intact COC. This was analyzed in this study by comparing the optical redox ratio of the cumulus cells and oocyte with the rate of oxygen consumption in the whole COC, and in response to a series of mitochondrial inhibitors/uncoupler. Following this, we showed that optical imaging using confocal microscopy could detect metabolic differences associated with oocyte developmental potential.
Toward demonstrating the potential for label-free optical imaging to be used clinically, we also used hyperspectral microscopy, which typically uses light doses 1-2 orders of magnitude lower than confocal microscopy [35,57]. This makes it compatible with clinical use due to the absence of photodamage, as shown in this and our previous study [49]. Importantly, results generated with hyperspectral imaging were comparable to those obtained from confocal microscopy. Our results showed that the optical redox ratio is an accurate assay to measure metabolic changes in the oocyte, through our comparison with oxygen consumption. In cumulus cells, the response to the inhibitors/uncoupler yielded similar changes in optical redox ratio and oxygen consumption, with the exception of oligomycin. Oligomycin inhibits ATP synthase in the electron transport chain (Supplementary Figure S3), leading to a decrease in oxygen consumption at complex IV, which was detected by the extracellular flux assay. Conversely, in cumulus cells the optical redox ratio did not change in response to oligomycin. This may be due to oligomycin causing a shift in metabolism from oxidative phosphorylation to glycolysis [59]. An increase in glycolysis would lead to elevated levels of NADH in the cytosol and a resultant increase in NAD(P)H fluorescence. A shift to glycolysis would not occur within the oocyte as it lacks active phosphofructokinase: the rate-limiting step in glycolysis [2,33,60]. Future studies could utilize metabolic inhibitors such as 2-deoxy-d-glucose and oxamate to understand the impact of glycolysis on the levels of NAD(P)H and FAD in the cumulus cells. It is important to note that the signal intensity of NAD(P)H captured here may potentially be a mixture of both NADH and NADPH due to their near-identical spectral properties [29,31]. Therefore, the results observed for NAD(P)H could potentially be attributed to (1) NADH produced from glycolysis and the tricarboxylic acid (TCA) cycle to generate ATP in the electron transport chain [29], or (2) cytosolic NADPH from the pentose phosphate pathway in response to oxidative stress [61,62]. In contrast, FAD can be directly linked to the activity of oxidative phosphorylation, as it is almost exclusively localized within mitochondria [29,63]. Importantly, we showed that changes in FAD intensity in response to the mitochondrial inhibitors/uncoupler were as hypothesized for the oocyte and cumulus cells, except for oligomycin in cumulus cells, as discussed above. Thus, recording FAD autofluorescence is an accurate measurement of oxidative phosphorylation, particularly for the oocyte. This demonstrates the power of label-free optical imaging to interrogate differences in metabolism between the oocyte and cumulus cells, while the OCR is limited in that it measures metabolism of the entire COC, potentially missing critical differences between the two cell compartments. The capacity of label-free optical imaging to detect metabolic differences associated with oocyte quality was demonstrated in this study using a mouse model of poor oocyte quality: etomoxir-induced inhibition of fatty acid oxidation during IVM. The importance of fatty acid oxidation for oocyte developmental potential was shown in this and previous studies [9,10,64]. While etomoxir does not directly target oxidative phosphorylation, we hypothesized that it would lead to a decrease in NAD(P)H and FAD, as fatty acids are normally metabolized to acetyl coenzyme A to generate NADH and FADH2 in the TCA cycle [9].
These metabolic cofactors can then be used in oxidative phosphorylation to generate ATP (Supplementary Figure S4). As expected, we observed a decrease in NAD(P)H and FAD intensities when COCs were matured in the presence of etomoxir. This demonstrates the premise of label-free imaging to detect metabolic differences in COCs associated with oocyte quality. We are now turning our attention to a wide range of models, both animal and human, where oocyte competence is known to be an issue. To demonstrate clinical utility, we also used hyperspectral microscopy, which requires a low energy dose for imaging. In addition to confirming the results obtained from confocal imaging in the COC, hyperspectral microscopy was able to detect altered metabolism in blastocysts developed from oocytes with poor developmental potential. This shows that insults that occur during IVM persist in resultant blastocysts, as seen in previous work when the insult occurred during oocyte maturation in vivo [65]. Importantly, we assessed the safety of imaging the COC using hyperspectral microscopy. This is of critical importance as previous studies have shown that light exposure can be harmful to the developing embryo [66][67][68]. Our results showed that exposure of COCs to this imaging modality had no impact on the ability of subsequent embryos to develop to the blastocyst stage, implant, or result in the birth of a live, healthy offspring. This is comparable to our results seen following hyperspectral imaging of the preimplantation embryo [49] and reaffirms the clinical potential of this imaging modality. Several imaging modalities are used to record autofluorescence from intracellular FAD and NAD(P)H, including laser scanning confocal microscopy, fluorescence lifetime imaging, and hyperspectral microscopy [33-35, 49, 54, 69, 70]. The use of single-channel optical imaging in this study is an accepted means of recording autofluorescence from these metabolic co-factors both in our field [33,34,36] and others [30,32,71,72]. Furthermore, this study also validates the use of this approach. In response to the mitochondrial inhibitors, changes in the optical redox ratio (FAD / [NAD(P)H + FAD]) were as hypothesized for the oocyte. This was also true for cumulus cells with the exception of oligomycin, the reasons for which are discussed above. It is important to note that this study was performed in a mouse model. Further evaluation of this imaging tool in preclinical studies and additional safety assessments in larger animal species are required prior to clinical implementation. As the developmental potential of an embryo is heavily reliant on the oocyte from which it is derived, non-invasive selection and ranking of oocytes may assist in optimizing an IVF cycle to increase the likelihood of success [73]. This study demonstrates that label-free optical imaging of NAD(P)H and FAD is a sensitive assay for measuring metabolism in the oocyte and detects metabolic signatures associated with oocyte quality. We believe that label-free optical imaging is a promising technique for measuring oocyte developmental potential, and potentially improving IVF success. Supplementary material Supplementary material is available at BIOLRE online. Data availability The data underlying this article will be shared on reasonable request to the corresponding author. Authorship contributions H.M.B., J.G.T., and K.R.D. conceived the idea for the study. T.C.Y.T., S.M., and K.R.D. were involved in the experimental design. T.C.Y.T.
was involved in data acquisition, generation of figures, and data analysis. T.C.Y.T., S.M., and K.R.D. were involved in the interpretation of data. T.C.Y.T. and K.R.D. wrote the first draft and most of the manuscript. All authors critically reviewed and edited the manuscript and approved the final version.
8,955.8
2022-06-17T00:00:00.000
[ "Biology" ]
Correlative Light and Electron Microscopy Reveals the HAS3-Induced Dorsal Plasma Membrane Ruffles Hyaluronan is a linear sugar polymer synthesized by three isoforms of hyaluronan synthases (HAS1, 2, and 3) that forms a hydrated scaffold around cells and is an essential component of the extracellular matrix. The morphological changes induced in cells by active hyaluronan synthesis are well recognized but have not previously been studied in detail at high resolution. We have previously found that overexpression of HAS3 induces growth of long plasma membrane protrusions that act as platforms for hyaluronan synthesis. The study of these thin and fragile protrusions is challenging, and they are difficult to preserve by fixation unless they are adherent to the substrate. Thus, their structure and regulation are still partly unclear despite careful imaging with different microscopic methods in several cell types. In this study, correlative light and electron microscopy (CLEM) was utilized to correlate the GFP-HAS3 signal with the surface ultrastructure of cells in order to study in detail the morphological changes induced by HAS3 overexpression. Surprisingly, this method revealed that GFP-HAS3 not only localizes to ruffles but in fact induces dorsal ruffle formation. Dorsal ruffles regulate diverse cellular functions, such as motility, regulation of glucose metabolism, spreading, adhesion, and matrix degradation, the same functions driven by active hyaluronan synthesis.

Introduction
Hyaluronan is synthesized at the inner face of the plasma membrane by three isoforms of hyaluronan synthases (HAS1, 2, and 3), unique enzymes that simultaneously elongate, bind, and extrude the growing hyaluronan chain directly into the extracellular space [1]. Active synthesis of hyaluronan enhances plasma membrane dynamics and the formation of several types of actin-based plasma membrane protrusions, like filopodia [2], lamellipodia [3], and membrane ruffles [4]. Ruffles are flat plasma membrane folds that use the actin-based machinery for their dynamic reshaping [5]. Their nomenclature varies between studies and includes dorsal ruffles, waves (because they resemble waves on a water surface), linear ruffles [6], and circular dorsal ruffles [7]. Two structurally similar but distinct types of ruffling have been reported, depending on their cellular location. Peripheral ruffling is typically associated with lamellipodia formation and migration [8], while dorsal ruffling is connected to macropinocytosis [9] and internalization of growth factor receptors [10]. Studies of HAS-induced protrusions have been challenging because the protrusions are thin and fragile and difficult to preserve during fixation and the other processing steps for light and electron microscopy [11][12][13]. In particular, protrusions arising from apical regions of the plasma membrane that are not adherent to the substratum easily shrink and collapse. Thus, their formation and the maintenance of their structure are still enigmatic, and their functions are partly uncharacterized. The aim of this work was to develop a simple, cost-effective method to correlate the fluorescent signal from confocal laser scanning microscopy (CLSM) with the fine morphology of the cells studied with a scanning electron microscope (SEM). With this method, we confirmed the GFP-HAS3 localization to the plasma membrane and its protrusions, their bulbous tip complexes, as well as to plasma membrane ruffles.
Surprisingly, it was found that ruffle-like plasma membrane folds in fact act as the basis of HAS-induced protrusions, which has not been reported previously. Additionally, it was shown in detail for the first time that GFP-HAS3 not only localizes to dorsal ruffles but also induces dorsal ruffle formation. The results obtained in this work bring us closer to a detailed characterization of hyaluronan-dependent plasma membrane protrusions, their regulation and functions. These structures are putative factors behind hyaluronan-driven effects in diseases like cancer, inflammation, and disorders of glucose metabolism.

2.3. Imaging, Image Processing, and Analysis
The day after transient transfection or induction of stable cells with doxycycline, fluorescent images were obtained with a Zeiss Axio Observer inverted microscope (10× NA 1.3 or 63× NA 1.4 oil objective) equipped with a Zeiss LSM 700 confocal module (Carl Zeiss Microimaging GmbH, Jena, Germany) and an external DIC-capable transmitted-light channel. The cells were fixed with 2% glutaraldehyde either before or immediately after confocal imaging. To control for the effect of hyaluronidase digestion on cell morphology, some samples were treated with Streptomyces hyaluronidase (Seikagaku Kogyo Co., 5 TRU/mL, 30 min at 37 °C) prior to fixation. Thereafter, the cells were routinely dehydrated in an ascending series of ethanol and hexamethyldisilazane and finally coated with a thin layer of gold. After processing, cells were imaged with a Zeiss Sigma HD|VP (Zeiss, Oberkochen, Germany) scanning electron microscope at 3 kV. Image processing, such as 3-dimensional rendering, analysis of images, and further modification, was performed using ZEN 2012 software (Carl Zeiss Microimaging GmbH), ImageJ 1.32 software (http://rsb.info.nih.gov/ij/), and Adobe Photoshop 8.0. SEM images (8-bit gray level, with a pixel resolution of 1024 × 728) were utilized to quantify plasma membrane ruffling. From both groups, 20 cells were selected for analysis. Image analysis was carried out using ImageJ. A representative area of the apical plasma membrane of each cell was outlined, and thresholding was utilized to distinguish ruffles from the background. After thresholding, the images were segmented so that the ruffles were separated with an automatic algorithm. Then the number and areas of discrete ruffles in each cell were calculated. The density of ruffles is presented as the number of ruffles per 100 μm² of cell area. It should be noted that the SEM images were recorded using the In-Lens detector, a concentric detector inside the SEM column. The use of the In-Lens detector prevented the shadowing effect that is typical of conventional secondary electron imaging.

Statistical Analysis
Statistical comparison was carried out using IBM SPSS Statistics software (ver. 19; SPSS Inc., Chicago, USA). The Mann-Whitney test was used to evaluate the difference in ruffling between the noninduced and induced cells. P values less than 0.05 were considered statistically significant.

A Simple Correlative Light and Electron Microscopy Method
This study presents a simple and easy process to image live or fixed cells by high-resolution CLSM and SEM. The area imaged by CLSM was easily relocalized in SEM by utilizing gridded coverslips (Figure 1). Simultaneous DIC imaging made recognition of the same cells easy in CLSM (Figure 1(a)) and SEM (Figure 1(b)).
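The ruffle quantification described above maps naturally onto standard image-analysis tooling. The following is a minimal Python sketch of that workflow using scikit-image and SciPy; Otsu thresholding and connected-component labeling stand in for ImageJ's thresholding and automatic segmentation, and the function names and pixel scale are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def ruffle_density(sem_roi, um_per_px):
    """Count discrete ruffles in an apical plasma-membrane region of an SEM image.

    sem_roi : 2D uint8 array (8-bit gray levels) covering a representative
    apical area; um_per_px : pixel size in micrometers (hypothetical input).
    Returns the ruffle density per 100 um^2 and the individual ruffle areas.
    """
    # Thresholding separates the bright ruffles from the darker background
    binary = sem_roi > threshold_otsu(sem_roi)
    # Connected-component labeling stands in for ImageJ's automatic segmentation
    regions = regionprops(label(binary))
    areas_um2 = np.array([r.area for r in regions]) * um_per_px ** 2
    roi_area_um2 = sem_roi.size * um_per_px ** 2
    density = 100.0 * len(regions) / roi_area_um2  # ruffles per 100 um^2
    return density, areas_um2

# Compare 20 induced vs. 20 noninduced cells, as in the text:
# u_stat, p = mannwhitneyu(induced_densities, noninduced_densities)
# The difference is considered significant if p < 0.05.
```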
Some shrinking during fixation and dehydration was detected, but the overall morphology of the cells was well preserved after fixation and processing for SEM. The GFP-HAS3-overexpressing cells were easily detached during processing (arrows in Figure 1), which indicates decreased adhesion as a result of HAS overexpression, a finding in line with previous results [22].

Higher Resolution Reveals Tip Expansions of Protrusions and Dorsal Ruffling
More detailed visualization of GFP-HAS3-positive MCF-7 cells was performed at higher magnification. A typical example of a single GFP-HAS3-positive MCF-7 cell is shown in Figure 2. The morphology was relatively unchanged after sample processing for SEM (Figure 2), and the GFP-HAS3-induced protrusions were preserved. As shown before, many of the HAS-induced protrusions have a dilated tip complex [14,15], but it has been unclear whether this dilation is an artefact resulting from the high level of GFP-HAS3 fluorescence and light scattering. SEM confirmed that many of the HAS3-induced protrusions have a dilated tip complex (arrows in Figures 2(d) and 2(e)), and dilated areas are occasionally also found in the body of the protrusions (arrows in Figures 2(g) and 2(h)). Protrusions with dilated tips were similarly formed upon overexpression of HAS3 without the GFP tag (data not shown). The thickness of protrusions expressing high levels of GFP-HAS3 signal is exaggerated in confocal microscopy because of light scattering. Protrusions also shrink and collapse during fixation and drying for SEM, which increases the differences in thickness observed between CLSM and SEM. As measured by SEM, the HAS3-induced protrusions are extremely thin, typically between 70 and 130 nm in diameter. The typical morphology of cells with GFP-HAS3 overexpression was spindle-shaped, with no clear single lamellipodium or distinguishable "front and rear" (arrows in Figure 3). Another typical feature of GFP-HAS3-positive cells was ruffling of the plasma membrane, appearing mainly on the apical faces of the plasma membrane. Comparison of negative cells (asterisks in Figures 3(a) and 3(b)) and GFP-HAS3-positive cells (arrows in Figures 3(a) and 3(b)) in transiently transfected cultures suggested that overexpression of HAS3 induces dorsal ruffling of the plasma membrane. To test whether the ruffles are sensitive to hyaluronidase treatment, samples were treated with Streptomyces hyaluronidase (5 TRU/mL, 30 min at 37 °C) prior to fixation. The results showed that removal of hyaluronan did not completely destroy them (Figures 3(d) and 3(e)). This indicates that their structure is not completely dependent on pericellular hyaluronan.

Quantification of GFP-HAS3-Induced Plasma Membrane Ruffling
To confirm the findings obtained with transiently transfected cells, dorsal plasma membrane ruffling was quantified utilizing the stable, inducible MCF-7 cell line expressing GFP-HAS3 [22]. Quantification of these complex structures of variable shape and size would be more time-consuming by light microscopy, requiring relatively high magnification and stacks of confocal images. Thus, SEM images were utilized for these measurements. The results showed that GFP-HAS3 expression significantly increased both the area and the number of dorsal plasma membrane ruffles as compared to noninduced cells (Figure 3(c)).

Ultrastructure of Ruffles
Next, high-resolution SEM images were utilized to analyze the detailed structure of the HAS3-induced ruffles.
Most of the ruffles appeared on the dorsal surface of the cell (Figures 2(b), 3, and 4) rather than in peripheral areas. The GFP-HAS3 signal was detected on the apical plasma membrane of cells and accumulated on the ruffles. Furthermore, SEM revealed that many of the HAS-positive protrusions were embedded in the ruffles, suggesting that ruffles provide a basis for the thinner protrusions. Typically, 1-5 or even more protrusions arose from a single ruffle (arrows in Figure 4(b)). This indicates specific modeling of plasma membrane dynamics and the underlying actin network by HAS activity. Most of the ruffles were linear or curved, sheet-like protrusions of variable size, but some circular structures were also found (arrows in Figures 4(a), 4(b), 4(d), and 4(e)), which is in line with the suggested dynamic formation of ruffles followed by constriction into circular structures before disappearing [5].

CLEM as a Novel Method to Study HAS-Induced Changes in Cell Morphology
In cell biology, multiple imaging methods are usually required to solve a specific scientific problem. Each imaging technique has its own limitations and alone does not fully answer the specific questions. Since the discovery of HAS3-induced plasma membrane extensions [11,12,23], hyaluronan-dependent plasma membrane modifications have been a specific cell-biological question waiting for clarification. To address this question, a straightforward CLEM protocol was developed to combine fluorescent and electron microscopic information from a single cell by using CLSM and SEM. The gridded coverslips were effective in relocating cells quickly and reliably over large areas, while also allowing the detailed morphology of the plasma membrane to be studied. This correlative method is an inexpensive and simple way to image live or fixed cells with confocal microscopy prior to viewing them at the electron microscope level. The method can be performed without specific equipment and can be reliably utilized to answer many questions related to the detailed localization of proteins and the morphology of cultured cells.

Tip Complexes of Protrusions
We have shown before that many of the HAS-induced protrusions have a dilated tip complex [14,15], but it has so far been unclear whether this dilation is an artefact resulting from accumulation of the GFP-HAS3 signal and light scattering. In this work, the existence of GFP-HAS3-positive bulbous expansions was confirmed in both the body and the tips of the protrusions. Tip complexes act as putative sites of origin for the shedding of hyaluronan-coated extracellular vesicles, which are potential carriers of hyaluronan and other active molecules [14,15]. Additionally, the tip complexes of protrusions are putative enrichment sites for specific proteins and act as functional areas for glucose uptake [24], which may be crucial for meeting the increased glucose demand of hyaluronan synthesis. Furthermore, the growth of protrusions [14,15] and vesicle shedding [14,15] are dependent on glucose supply, which makes future studies of these tip complexes and their role in hyaluronan metabolism especially interesting.

Hyaluronan and Plasma Membrane Ruffling
Localization of hyaluronan and its receptors to ruffles has been reported in many different cell types, such as fibroblasts [25], EGF-induced rat keratinocytes [26], CHO cells [4], and HaCaT cells [20]. Recently, HAS3 localization to ruffles has also been reported [20], but it has not been shown before that the activity of hyaluronan synthesis itself induces ruffling of the plasma membrane.
In this study, a clear increase was seen in the area and amount of ruffling after induction of HAS3 expression. Interestingly, growth factors that induce HAS expression and hyaluronan secretion, like EGF [26] and PDGF [27], induce dorsal membrane ruffling [10,28]. As shown in this work, plasma membrane ruffling, together with thin protrusions, is a putative mechanism by which HAS increases the plasma membrane area in order to enhance its own activity. Moreover, the small GTPase Rac, which is one of the targets of hyaluronan-CD44 signaling [29], is required for membrane ruffling [30]. Furthermore, one of the cofactors of hyaluronan, MMP2 [31], localizes to the tips of ruffles [32], suggesting a role for ruffles in promoting the degradation of the ECM and the invasive potential of cells in collaboration with hyaluronan interactions. All of these observations support the hypothesis that hyaluronan synthesis and plasma membrane ruffling are linked together.

Putative Mechanisms for Hyaluronan-Associated Plasma Membrane Ruffling
As shown here and before, active hyaluronan synthesis regulates both finger-like protrusions, like filopodia and microvilli, and sheet-like protrusive structures such as lamellipodia and ruffles. Like finger-like protrusions, ruffles are structures that usually erect vertically from the dorsal cell surface. The pericellular, hydrated hyaluronan coat may provide a pulling force and mechanical support for the growth and maintenance of these nonadherent structures. Interestingly, disturbance of ECM-integrin interactions induces plasma membrane ruffling [8]. These interactions may be impaired in our model, where excess ventral hyaluronan due to HAS overexpression results in weaker cell attachment to the substratum. Furthermore, dorsal ruffles are suggested to arise as a consequence of inefficient lamellipodia adhesion and an impaired migration rate [8], and the assembly of ruffles inhibits actin flow to lamellipodia [7]. These findings may explain why high levels of HAS overexpression lead to decreased lamellipodia formation and an impaired migration rate [33].

Hyaluronan-Dependent Protrusions as Potential Sites for Glucose Uptake
Insulin or high glucose induces plasma membrane ruffling with simultaneous recruitment of glucose transporters (GLUT) into the plasma membrane ruffles of muscle cells [34]. Furthermore, GLUT translocation and insulin-stimulated glucose uptake are dependent on cortical actin remodeling and membrane ruffling [35]. These results indicate that hyaluronan, plasma membrane dynamics, and glucose metabolism are linked to each other and suggest that ruffles act as potential sites for cellular glucose uptake. This hypothesis fits well with previous reports of a positive correlation between cellular glucose levels and the activity of hyaluronan synthesis [14,15,36,37] and suggests that HAS directly or indirectly recruits glucose transporters into the plasma membrane ruffles and protrusion tip complexes to enhance its own activity.

Future Prospects
We have shown previously that active hyaluronan production induces the formation of plasma membrane extensions [11,12] and the blebbing of extracellular vesicles [14,15]. The induction of plasma membrane ruffles, the main finding of this work, strengthens the role of active hyaluronan synthesis as a regulator of plasma membrane dynamics, regulating cell behavior in health and disease.
Future studies will clarify in more detail the dynamics and regulation of HAS-induced ruffles and their relationship with hyaluronan secretion and other cell functions, like the secretion of extracellular vesicles and the regulation of glucose uptake.
3,577
2015-09-10T00:00:00.000
[ "Biology" ]
Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with existing multiview images, images at any virtual viewpoint position can be efficiently synthesized, enabling the display of more realistic 3D scenes. Differently from the conventional 2D texture image, the depth map contains a lot of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when using MD coding for robust transmission. In this paper, we focus on the redundancy removal of MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the amount of high-frequency coefficients in the DCT domain to be removed. Considering its low computational complexity, the entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding conventional scheme, the proposed method demonstrates better central and side rate-distortion performance.

Introduction
3D images and video, which offer a stereoscopic and immersive multimedia experience, have attracted increasing attention [1]. Recently, the applications of 3D images and video have become more and more popular in many fields, including cinema, TV broadcasting, streaming and smart phones [2]. Differently from the conventional 2D texture image, the depth map, as a special format of 3D image data, represents the distance information between a camera and the objects in the scene. Each pixel of a depth frame represents the distance from a specific object in the view of the depth sensor to the camera plane. Depth maps are often treated as gray-scale image sequences, which are similar to the luminance component of texture videos. However, the depth map has its own special characteristics. First, the depth map signal is much sparser than the texture image. It contains no texture but has sharp object boundaries, because the gray levels are nearly the same in most regions within an object but change abruptly across boundaries. Furthermore, the depth map is not directly used for display, but it plays an important role in virtual view synthesis.
During the past few years, many studies have focused on the depth map. Edge-based compression was applied to the depth map in [3], which utilizes JBIG (Joint Bi-level Image experts Group) to encode the contours and DPCM (Differential Pulse Code Modulation) to compress both the pixels around the contours and the uniform sparse sampling points of the depth map. The paper in [4] uses a symmetric smoothing filter to smooth the whole depth map before image warping. While this method decreases the number of holes as the smoothing strength becomes stronger, some geometric distortions also become visible in regions with vertically straight lines. Instead of coding numerous views, two or three texture views with their corresponding depth maps are used in the multiview video coding (MVC) standard. Virtual views can then be generated at preferred viewing angles through depth-image-based rendering (DIBR) [5]. Later, many scholars applied compressive sensing (CS) theory to texture image and depth map coding. Compressive sensing theory was proposed by Candès and Donoho in [6][7][8]. CS applications in image/video coding can be found in [9][10][11][12]. In [9], a new image/video coding approach for spatially sparse signals is proposed, which combines CS theory with the traditional discrete cosine transform (DCT)-based coding method to achieve better compression efficiency. In [10], using block compressed sensing, image acquisition is conducted in a block-by-block manner through the same operator. It is claimed that this can sufficiently capture the complicated geometric structures of natural images. In [11,12], compressed sensing theory is used in depth map compression, which subsamples the depth map in the frequency domain by choosing a sub-sampling matrix that ensures the incoherence of the measurements. Two more recent papers [13,14] propose two kinds of exponential wavelet iterative shrinkage thresholding algorithms based on compressed sensing theory. The work in [13] is only for magnetic resonance images, and [14] adds a random shift for magnetic resonance images. Both schemes were tested on four kinds of magnetic resonance images and showed better reconstruction quality and faster reconstruction speed compared to state-of-the-art algorithms such as FCSA (Fast Composite Splitting Algorithm), ISTA (Iterative Shrinkage/Threshold Algorithm) and FISTA (Fast Iterative Shrinkage/Threshold Algorithm).

Multiple description (MD) coding is a technique that has emerged as a promising approach to enhance the fault tolerance of a video delivery system [15]. In 1993, the first work on MD coding was introduced in [16]. The original video signal can be split into multiple bit streams (descriptions) using an MD encoder. These descriptions can then be transmitted over multiple channels. The probability that all channels fail at the same time is very low. Therefore, at the MD decoder, even a single received description can reconstruct the video with acceptable quality; the resultant distortion is called side distortion. Of course, more descriptions produce video of better quality. In a simple architecture with two channels, the distortion with two received descriptions is called central distortion [17].
Later, a number of MDC methods have been presented, mainly including the MD scalar quantizer [18], the MD lattice vector quantizer [19,20], MD based on pairwise correlating transforms [21] and MD based on FEC (Forward Error Correction) [22]. All of the above methods are difficult to apply in practical applications because these specially designed MD encoders are not compatible with widely used standard codecs. To overcome this limitation, some standard-compliant MD video coding schemes were designed and achieved promising results [23,24]. Another significant class of MDC is based on pre- and post-processing. In preprocessing, the original source is split into multiple sub-sources before encoding, and these sub-sources are then encoded separately by a standard codec to generate multiple descriptions. Typical versions are MDC based on spatial sampling [25] and temporal sampling [26,27].

In this paper, considering the special characteristics of depth information, an MDC scheme is proposed based on optimized redundancy removal for the 3D depth map. Firstly, the depth map is transformed from the pixel domain into the DCT domain. According to the characteristics of the DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the amount of high-frequency coefficients in the DCT domain to be removed. Considering the low computational complexity, the entropy is adopted to estimate the bit rate in the optimization. Then, the processed depth map is transmitted, separated according to odd and even rows or columns. We use different adaptive decoding methods for the two situations. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost.

The rest of this paper is organized as follows. The proposed scheme is presented in Section 2. In Section 3, the performance of the proposed scheme is investigated through extensive simulation in different situations. The paper is finally concluded in Section 4.

Encoder
The block diagram of the proposed encoder is illustrated in Figure 1. Here, the depth maps of any view can be encoded as follows. At the encoder, the depth map is first transformed into the DCT domain. According to the characteristics of the high-frequency DCT coefficients, a Lagrange optimization approach is designed to determine the redundancy to be removed. After the inverse DCT (IDCT, Inverse Discrete Cosine Transformation), the processed depth map is separated by means of odd and even lines. The generated sub-images can be encoded by any standard codec to produce two descriptions.
As depicted in the block diagram, by removing N lines of high-frequency coefficients in the DCT domain, approximately redundant information can be reduced. Here, N is very important for the entire coding process, as it affects the reconstructed image quality at the decoder. The side and central distortion are sensitive to the value of N: when N increases, both side and central distortion increase. On the other hand, the bit rates associated with side and central distortion decrease as N increases. To achieve an optimized value of N, the optimization problem min_N (R(N), D_S(N), D_C(N)) should be solved. In this optimization problem, we need to consider the bit rate and the distortion of the images reconstructed by the side and central decoders. It can be rewritten as a Lagrange-optimized formulation, as shown in Equation (1):

J(N) = D_S(N) + λ₁ D_C(N) + λ₂ R(N),    (1)

where J(N) denotes the Lagrange cost function, and R(N), D_S(N) and D_C(N) denote the bit rate, side distortion and central distortion of the image, respectively. Additionally, λ₁ and λ₂ are balancing parameters weighting R(N), D_S(N) and D_C(N). In this paper, the side distortion D_S(N) and the central distortion D_C(N) are regarded as equally significant, so λ₁ = 1. In the experimental results, we will discuss the rate-distortion performance under different values of λ₂.

Solving the optimization problem in Equation (1) is difficult in multiple description coding. Furthermore, given λ₂, a triple (D_S(N), D_C(N), R(N)) needs to be computed for each mode selection. This requires actual, time-consuming encoding and decoding for each mode. Taking the lower encoding complexity into account, we use the entropy of the image instead of the bit rate R(N). Here, R_e(N) represents the entropy of a depth map. It is defined as follows:

R_e(N) = −Σ_{i=0}^{k} p_i log₂ p_i,    (2)

where p_i is the probability of the gray level i in the depth map and k is the number of gray levels.
In this paper, we need to count the probability of occurrence of each gray level in the depth map, which is used in the calculation of the entropy. The probability is computed as

p_i = s_i / S,

where S is the total number of pixels in the image and s_i is the number of pixels whose value is i. Once all p_i of an image are obtained, the entropy can be calculated according to Equation (2). Furthermore, the MSE (mean square error) is used to represent D_S(N) and D_C(N). Let I₁ and I₂ denote two images of width W and height H; their MSE is

MSE(I₁, I₂) = (1/(W·H)) Σ_x Σ_y [I₁(x, y) − I₂(x, y)]².

In this paper, D_C(N) is the MSE between the reconstruction from the central decoder and the original map, and D_S(N) is the average MSE of the two reconstructions from the side decoders. Here, MSE₁ is used to compute the distortion from one side decoder, and MSE₂ is used to compute the distortion from the other side decoder.

We use an iterative approach to solve for N. Because the block size in the encoder is set to 16, the value of N is limited to between 16 and 16m, where m differs between sequences. If m is too small, a better value of N may not be found before the algorithm stops; if m is too large, R_e(N) keeps increasing and λ₂ loses its role.

The processed depth map is separated according to odd and even lines in the DCT domain; then N and λ₂ are initialized. We compute the values of R_e(N), D_S(N) and D_C(N) from the two sub-images, calculate J(N) from these three values, and iterate over N. When N passes the limit of 16 to 16m, the optimized N is chosen as the one giving the minimum value of J(N). The basic optimization algorithm is shown in Figure 2.

Decoder
At the decoder, there are two cases for MDC. For a depth map, the distortion with both received descriptions is called central distortion, and the distortion with only one received description is called side distortion [15]. The block diagram of the proposed decoder is illustrated in Figure 3. Firstly, the design of the side decoder is discussed. For the side decoder, only Description 1 or 2 is received. Since the processes for Descriptions 1 and 2 are the same, we use Description 1 as the example. After standard decoding, the decoded odd sub-image is transformed into the DCT domain. Then, according to the resolution of the original image, adaptive zero-padding is applied to fill in approximate zeros for the DCT coefficients of the odd sub-image. Finally, after the inverse DCT, the reconstructed depth image is obtained, which is used to calculate the side distortion. Next, we discuss the design of the central decoder. For the central decoder, both descriptions are received. Therefore, after standard decoding, Descriptions 1 and 2 are rearranged according to the odd and even lines. Then, with the same process as in the side decoder, adaptive zero-padding is used to fill zeros in the DCT domain. At last, we obtain the reconstruction with central distortion.
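To make the pipeline concrete, the following Python sketch implements the main steps described above: entropy estimation (Equation (2)), DCT-domain high-frequency removal, odd/even splitting into two descriptions, zero-padded side reconstruction, and the Lagrangian cost J(N). It is a simplified illustration only: it assumes a whole-image DCT instead of the paper's 16×16 blocks, omits the standard codec, uses a heuristic amplitude rescaling in the zero-padded reconstruction, and follows the reconstructed form of Equation (1) above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def entropy_bits(img):
    """R_e: entropy of an 8-bit depth map (Equation (2))."""
    counts = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def remove_high_freq(depth, n):
    """Zero the last n rows of DCT coefficients (high vertical frequencies)."""
    coeffs = dctn(depth.astype(float), norm='ortho')
    coeffs[-n:, :] = 0.0
    return idctn(coeffs, norm='ortho')

def split_descriptions(img):
    """Separate the processed map into odd- and even-row sub-images."""
    return img[0::2, :], img[1::2, :]

def zero_pad_reconstruct(sub, full_shape):
    """Side decoder: pad the sub-image DCT with zeros up to full resolution.

    The sqrt rescaling compensates for the smaller transform support; this
    is a heuristic assumption, not the paper's exact adaptive rule.
    """
    coeffs = dctn(sub.astype(float), norm='ortho')
    padded = np.zeros(full_shape)
    padded[:coeffs.shape[0], :coeffs.shape[1]] = coeffs
    return idctn(padded, norm='ortho') * np.sqrt(full_shape[0] / sub.shape[0])

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def lagrange_cost(depth, n, lam1=1.0, lam2=0.1):
    """J(N) = D_S + lam1 * D_C + lam2 * R_e, per the reconstructed Equation (1)."""
    processed = remove_high_freq(depth, n)
    d1, d2 = split_descriptions(processed)
    d_side = 0.5 * (mse(depth, zero_pad_reconstruct(d1, depth.shape)) +
                    mse(depth, zero_pad_reconstruct(d2, depth.shape)))
    d_central = mse(depth, processed)  # both descriptions received
    rate = entropy_bits(np.clip(processed, 0, 255))
    return d_side + lam1 * d_central + lam2 * rate

# Grid search over N in steps of the 16-pixel block size (m is sequence-dependent):
# best_n = min(range(16, 16 * m + 1, 16), key=lambda n: lagrange_cost(depth, n))
```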
Experimental Results
To highlight the performance of the proposed scheme, the experiments are implemented on five standard sequences of depth maps, namely Balloons, Kendo, Newspaper, Dancer and Pantomime, which can be downloaded from [28]. The detailed information and the choice of some parameters of the tested sequences are shown in Table 1. This paper focuses on the comparison of the proposed optimized scheme against the conventional scheme. To prove the universality of the experiment, several groups of data in each sequence are selected for comparison. Following MDC quality assessment practice, we compare not only the rate-central-distortion performance when two descriptions are received correctly, but also the rate-side-distortion performance when only one description is received. The objective and subjective quality comparison for the sequences Balloons, Kendo, Newspaper, Dancer and Pantomime is presented in Figures 4-8. In these figures, the horizontal axis represents the bit rate, and the vertical axis represents the PSNR values. One view is chosen for each of the five sequences for comparison in (a): the first view of Balloons, Kendo and Dancer, the second view of Newspaper and the 37th view of Pantomime. Compared to the basic comparison scheme, our proposed depth map coding scheme can on average improve around 1.2 dB for Balloons, 0.4 dB for Kendo, 0.5 dB for Newspaper, 2.6 dB for Dancer and 3.0 dB for Pantomime in side distortion, and around 1.2 dB for Balloons, 0.5 dB for Kendo, 0.8 dB for Newspaper, 3 dB for Dancer and 3.1 dB for Pantomime in central distortion, at the same or a very close bit rate.
Another view is chosen for each of the five sequences for comparison in (b): the third view of Balloons, Kendo and Dancer, the fourth view of Newspaper and the 39th view of Pantomime. Compared to the basic comparison scheme, our proposed depth map coding scheme can on average improve around 0.7 dB for Balloons, 0.6 dB for Kendo, 0.6 dB for Newspaper, 3 dB for Dancer and 2.8 dB for Pantomime in side distortion, and around 0.8 dB for Balloons, 0.6 dB for Kendo, 0.8 dB for Newspaper, 3.3 dB for Dancer and 2.8 dB for Pantomime in central distortion, at the same or a very close bit rate.

Given that the depth map is not directly used for display, the objective and subjective qualities of the rendered virtual views should also be taken into account. Regarding objective quality, a synthesized virtual viewpoint image can be obtained from two original camera images. For example, for the tested sequences Balloons, Kendo and Dancer, the depth and texture from the first and third views can be used to synthesize the texture of the second view; for the sequence Newspaper, the depth and texture from the second and fourth views can generate the texture of the third view; and for the sequence Pantomime, the depth and texture from the 37th and 39th views can be used to synthesize the texture of the 38th view. Here, the texture image is also compressed, and the comparison for synthesized images is presented in (c). Compared to the basic comparison scheme, our proposed synthesis scheme can on average improve around 1.1 dB for Balloons, 0.5 dB for Kendo, 0.4 dB for Newspaper, 2.2 dB for Dancer and 2.6 dB for Pantomime in side distortion, and around 1.1 dB for Balloons, 0.6 dB for Kendo, 0.4 dB for Newspaper, 2.2 dB for Dancer and 2.7 dB for Pantomime in central distortion, at the same or a very close bit rate.

Furthermore, the advantages of the proposed scheme can be seen more clearly in Figures 9-12, in which the subjective quality of the synthesized virtual viewpoints of Kendo and Dancer is presented, especially in the parts marked by the red rectangles.

The structural similarity (SSIM) index [29] is a method for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos. SSIM measures the similarity between two images and is a full-reference metric. It was designed to improve on traditional methods such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE), which have proven to be inconsistent with human visual perception. The SSIM values for the sequences Balloons, Kendo, Newspaper, Dancer and Pantomime are reported in Tables 2-11: Tables 2, 4, 6, 8 and 10 give the SSIM values of the five sequences for the side decoder, and Tables 3, 5, 7, 9 and 11 give those for the central decoder. As can be seen from the tables, the SSIM values of the depth maps produced by the proposed scheme and of the synthesized images, measured against the uncompressed originals, are close to one, and the values of the proposed scheme are greater than or equal to those of the comparison scheme. This indicates that the proposed scheme is better than the basic comparison scheme.
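The objective metrics used above are straightforward to reproduce. Below is a small sketch, assuming 8-bit depth maps, that computes PSNR from the MSE and SSIM via scikit-image; the variable names are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, peak=255.0):
    """PSNR in dB from the MSE between a reference and a test image."""
    err = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / err)

# ref: uncompressed original; side_rec / central_rec: decoder outputs
# (hypothetical arrays standing in for the reconstructions):
# print(psnr(ref, side_rec), psnr(ref, central_rec))
# print(structural_similarity(ref, side_rec, data_range=255))
```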
Conclusions
In this paper, part of the high-frequency component of the depth map is removed in the DCT domain. An effective and adaptive optimization was incorporated into the proposed encoding scheme to achieve better central/side rate-distortion performance. The experiments show that the proposed scheme also has better subjective quality. Therefore, our proposed scheme is clearly a worthy choice for depth map coding.

Figure 3. The diagram of the proposed decoder.
Figure 4. Objective quality comparison for the sequence Balloons: (a) the left view of the depth map; (b) the right view of the depth map; (c) the synthesized virtual viewpoint.
Figure 6. Objective quality comparison for the sequence Newspaper: (a) the left view of the depth map; (b) the right view of the depth map; (c) the synthesized virtual viewpoint.
Table 1. The parameters of the tested sequences.
Table 2. The SSIM values of the sequence Balloons for the side decoder.
Table 3. The SSIM values of the sequence Balloons for the central decoder.
Table 6. The SSIM values of the sequence Newspaper for the side decoder.
Table 7. The SSIM values of the sequence Newspaper for the central decoder.
5,761
2016-06-29T00:00:00.000
[ "Computer Science", "Engineering" ]
The Sloan Digital Sky Survey Reverberation Mapping Project: Estimating Masses of Black Holes in Quasars with Single-Epoch Spectroscopy It is well known that reverberation mapping of active galactic nuclei (AGN) reveals a relationship between AGN luminosity and the size of the broad-line region, and that use of this relationship, combined with the Doppler width of the broad emission line, enables an estimate of the mass of the black hole at the center of the active nucleus based on a single spectrum. This has been discussed in numerous papers over the last two decades. An unresolved key issue is the choice of parameter used to characterize the line width; generally, most researchers use FWHM in favor of line dispersion (the square root of the second moment of the line profile) because the former is easier to measure, less sensitive to blending with other features, and usually can be measured with greater precision. However, use of FWHM introduces a bias, stretching the mass scale such that high masses are overestimated and low masses are underestimated. Here we investigate estimation of black hole masses in AGN based on individual or "single epoch" observations, with a particular emphasis on comparing mass estimates based on line dispersion and FWHM. We confirm the recent findings that, in addition to luminosity and line width, a third parameter is required to obtain accurate masses, and that parameter seems to be Eddington ratio. We present simplified empirical formulae for estimating black hole masses from the Hβ (4861 Å) and C IV (1549 Å) emission lines.

The presence of emission lines with Doppler widths of thousands of kilometers per second is one of the defining characteristics of active galactic nuclei (Burbidge & Burbidge 1967; Weedman 1976). It was long suspected that the large line widths were due to motions in a deep gravitational potential, implying very large central masses (e.g., Woltjer 1959), as did the Eddington limit (Tarter & McKee 1973). Under a few assumptions, the central mass is M ∝ V²R, where V is the Doppler width of the line and R is the size of the broad-line region (BLR). It is the latter quantity that is difficult to determine. An early attempt to estimate R by Dibai (1980) was based on the assumption of constant emissivity per unit volume, but it led to an incorrect dependence on luminosity: in this case luminosity is proportional to volume, so R ∝ L^{1/3}. Wandel & Yahil (1985) inferred the BLR size from the Hβ luminosity. Other attempts were based on photoionization physics (see Ferland & Shields 1985; Osterbrock 1985). Davidson (1972) found that the relative strength of emission lines in ionized gas could be characterized by an ionization parameter

U = Q(H) / (4π r² c n_H),

where Q(H) is the rate at which H-ionizing photons are emitted by the central source, r is the distance of the gas from the source, c is the speed of light, and n_H is the particle density of the gas. The ionization parameter U is proportional to the ratio of the ionization rate to the recombination rate in the BLR clouds. The similarity of emission-line flux ratios in AGN spectra over orders of magnitude in luminosity suggested that U is constant, and the presence of C III] λ1909 set an upper limit on the density of n_H ≲ 10^{9.5} cm^{-3} (Davidson & Netzer 1979). Since L ∝ Q(H), this naturally led to the prediction that the BLR radius would scale with luminosity as R ∝ L^{1/2}.
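For reference, a minimal sketch of the single-epoch recipe implied by these scalings follows: estimate the BLR radius from a radius-luminosity relation and combine it with a line width in the virial equation M = f V²R/G. The R-L coefficients shown follow the Hβ calibration of Bentz et al. (2013) as an illustrative choice, and f is set to unity as in the virial products discussed below; none of this is the paper's final calibration.

```python
import numpy as np

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10        # speed of light, cm s^-1
MSUN = 1.989e33     # solar mass, g
LIGHT_DAY = C * 86400.0

def blr_radius_lt_days(l5100, log_k=1.527, slope=0.533):
    """R-L relation: log(R / lt-day) = log_k + slope * log(L5100 / 1e44 erg/s).

    The coefficients follow the Hbeta calibration of Bentz et al. (2013)
    and are illustrative, not this paper's calibration.
    """
    return 10.0 ** (log_k + slope * np.log10(l5100 / 1.0e44))

def virial_mass_msun(line_width_kms, lag_days, f=1.0):
    """M = f * V^2 * R / G, i.e., f times the virial product, in solar masses."""
    v = line_width_kms * 1.0e5          # km/s -> cm/s
    r = lag_days * LIGHT_DAY            # light days -> cm
    return f * v ** 2 * r / G / MSUN

# Example: L(5100 A) = 1e44 erg/s and FWHM = 4000 km/s give ~1e8 solar masses:
# m_bh = virial_mass_msun(4000.0, blr_radius_lt_days(1.0e44))
```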
Unfortunately, best-estimate values for Q(H) and n_H led to a significant overestimate of the BLR radius as a consequence of the simple but erroneous assumption that all the broad lines arise cospatially (i.e., models employed a single representative BLR cloud). With the advent of reverberation mapping (hereafter RM; Blandford & McKee 1982; Peterson 1993), direct measurements of R enabled improved black hole mass determinations. Attempts to estimate black hole masses based on early RM results and the R ∝ L^{1/2} prediction included those of Padovani & Rafanelli (1988), Koratkar & Gaskell (1991), and Laor (1998). The first multiwavelength RM campaigns demonstrated ionization stratification of the BLR (Krolik et al. 1991; Peterson et al. 1991), and this eventually led to identification of the virial relationship, R ∝ V^{-2} (Peterson & Wandel 1999, 2000; Onken & Peterson 2002; Kollatschny 2003; Bentz et al. 2010), which gave reverberation-based mass measurements higher levels of credibility. Of course, the virial relationship demonstrates only that the central force has an R^{-2} dependence, which is also characteristic of radiation pressure; whether or not radiation pressure from the continuum source is important has not been clearly established (Marconi et al. 2008, 2009; Netzer & Marziani 2010). If radiation pressure in the BLR turns out to be important, then the black hole masses, as we discuss them here, are underestimated. Masses of AGN black holes are computed as

M = f (V²R/G),

where V is the line width, R is the size of the BLR from the reverberation lag, and G is the gravitational constant. The quantity in parentheses is often referred to as the virial product µ; it incorporates the two observables in RM, line width and time delay τ = R/c, and is in units of mass. The scaling factor f is a dimensionless quantity of order unity that depends on the geometry, kinematics, and inclination of the AGN. Throughout most of this work, we ignore f (i.e., set it to unity) and work strictly with the virial product. While reverberation mapping has emerged as the most effective technique for measuring black hole masses in AGNs, it is resource intensive, requiring many observations over an extended period of time at fairly high cadence. Fortunately, observational confirmation of the R-L relationship (Kaspi et al. 2000, 2005; Bentz et al. 2006a, 2009a, 2013) enables "single-epoch" (SE) mass estimates because, in principle, a single spectrum can yield V and also R, through measurement of L (e.g., Wandel, Peterson, & Malkan 1999; McLure & Jarvis 2002; Vestergaard 2002; Corbett et al. 2003; Vestergaard 2004; Kollmeier et al. 2006; Fine et al. 2008; Shen et al. 2008a,b; Vestergaard et al. 2008). Of the three strong emission lines generally used to estimate central black hole masses, the R-L relationship is only well-established for Hβ λ4861 (Bentz et al. 2013, and references therein, but see the discussion in §3.3). Empirically establishing the R-L relationship for Mg II λ2798 (Homayouni et al. 2020) and C IV λ1549 (Kaspi et al. 2007; Trevese et al. 2014; Lira et al. 2018; Grier et al. 2019; Hoormann et al. 2019) has been difficult. Masses based on the C IV λ1549 emission line, in particular, have been somewhat controversial. Some studies claim that there is good agreement between masses based on C IV and those measured from other lines (Greene, Peng, & Ludwig 2010; Assef et al. 2011). On the other hand, there are several claims that there is inadequate agreement with masses based on other emission lines (Baskin & Laor 2005; Netzer et al.
2007; Sulentic et al. 2007; Shen et al. 2008b; Shen & Liu 2012; Trakhtenbrot & Netzer 2012). Denney et al. (2009a) and Denney et al. (2013), however, note that there are a number of biases that can adversely affect single-epoch mass estimates, with low-S/N "survey quality" data being an important problem with some of the studies for which poor agreement between C IV and other lines is found. It has also been argued, however, that some fitting methodologies are more affected by this than others. There have been more recent papers that attempt to correct C IV mass determinations to better agree with those based on other lines (e.g., Runnoe et al. 2013; Coatman et al. 2017; Mejía-Restrepo et al. 2018; Marziani et al. 2019).

Characterizing Line Widths
It is our suspicion that the apparent difficulties with C IV-based masses trace back not only to the S/N issue, but also to how the line widths are characterized. It has been customary in AGN studies to characterize line widths by one of two parameters, either FWHM or the line dispersion σ_line, which is defined by

σ_line² = [∫ λ² P(λ) dλ / ∫ P(λ) dλ] − λ₀²,

where P(λ) is the emission-line profile as a function of wavelength and λ₀ is the line centroid,

λ₀ = ∫ λ P(λ) dλ / ∫ P(λ) dλ.

While both FWHM and σ_line have been used in the virial equation to estimate AGN black hole masses, they are not interchangeable. It is well known that AGN line profiles depend on the line width (Joly et al. 1985): broader lines have lower kurtosis, i.e., they are "boxier" rather than "peakier." Indeed, for AGNs, the ratio FWHM/σ_line has been found to be a simple but useful characterization of the line profile (Collin et al. 2006; Kollatschny & Zetzl 2013). Each line-width measure has practical strengths and weaknesses (Wang et al. 2020). The line dispersion σ_line is more physically intuitive, but it is sensitive to the line wings, which are often badly blended with other features. All three of the strong lines usually used to estimate masses (Hβ λ4861, Mg II λ2798, and C IV λ1549) are blended with other features: the Fe II λ4570 and Fe II λλ5190, 5320 complexes (Phillips 1978) and He II λ4686 in the case of Hβ, the UV Fe II complexes in the case of Mg II, and He II λ1640 in the case of C IV. The FWHM can usually be measured more precisely than σ_line (although Peterson et al. 2004 note that the opposite is true for the rms spectra, described below, which are sometimes quite noisy), but it is not clear that FWHM yields more accurate mass measurements. In practice, FWHM is used more often than σ_line because it is relatively simple to measure and can be measured more precisely, while σ_line often requires deblending or modeling the emission features, which does not necessarily yield unambiguous results. There are, however, a number of reasons to prefer σ_line to FWHM as the line-width measure for estimating AGN black hole masses. Fromerth & Melia (2000) point out that σ_line better characterizes an arbitrary or irregular line profile. Peterson et al. (2004) note that σ_line produces a tighter virial relationship than FWHM, and Denney et al. (2013) find better agreement between C IV-based and Hβ-based mass estimates by using σ_line rather than FWHM (these latter two are essentially the same argument). In the case of NGC 5548, for which there are multiple reverberation-based mass measures, a possible correlation with luminosity is stronger for FWHM-based masses than for σ_line-based masses, suggesting that the former are biased, as the same mass should be recovered regardless of the luminosity state of the AGN (Collin et al.
2006; Shen & Kelly 2012). A possibly more compelling argument for using σ_line instead of FWHM is the bias in the mass scale that is introduced by using FWHM as the line width. Steinhardt & Elvis (2010) used single-epoch masses for more than 60,000 SDSS quasars (Shen et al. 2008b), with masses computed using FWHM. They found that, in any redshift bin, if one plots the distribution of mass versus luminosity, the higher-mass objects lie increasingly below the Eddington limit; they refer to this as the "sub-Eddington boundary." There is no physical basis for this. Rafiee & Hall (2011) point out, however, that if the quasar masses are computed using σ_line instead of FWHM, the sub-Eddington boundary disappears: the distribution of quasar black hole masses approaches the Eddington limit at all masses. Referring to Figure 1 of Rafiee & Hall (2011), the distribution of quasars in the mass vs. luminosity diagram is an elongated cloud of points whose axis is roughly parallel to lines of constant Eddington ratio when σ_line is used to characterize the line width. However, when FWHM is used, the axis of the distribution rotates as the higher masses are overestimated and the lower masses are underestimated. However, the apparent rotation of the mass distribution is in the same sense as expected from the Malmquist bias and a bottom-heavy quasar mass function (Shen 2013). Unfortunately, these arguments are not statistically compelling. Examination of the M_BH-σ* relation using FWHM-based and σ_line-based masses is equally unrevealing.

In reverberation mapping, a further distinction among line-width measures must be drawn, since either FWHM or σ_line can be measured in the mean spectrum,

F̄(λ) = (1/N) Σ_{i=1}^{N} F_i(λ),

where F_i(λ) is the flux in the i-th spectrum of the time series at wavelength λ and N is the number of spectra, or they can be measured in the rms residual spectrum (hereafter simply "rms spectrum"), which is defined as

σ(λ) = [ (1/(N−1)) Σ_{i=1}^{N} (F_i(λ) − F̄(λ))² ]^{1/2}.

In this paper, we will specifically refer to measurements of σ_line in the mean spectrum as σ_M and in the rms spectrum as σ_R. Similarly, FWHM_M refers to the FWHM of a line in the mean spectrum or a single-epoch spectrum, and FWHM_R is the FWHM in the rms spectrum. It is common to use σ_R as the line-width measure for determining black hole masses from reverberation data; it is intuitively a good choice, as it isolates the gas in the BLR that is actually responding to the continuum variations. As noted previously, the strong and strongly variable broad emission lines can be hard to isolate, as they are blended with other features. In the rms spectra, however, the contaminating features are much less of a problem because they are generally constant or vary either slowly or weakly and thus nearly disappear in the rms spectra. Since the goal is to measure a black hole mass from a single (or a few) spectra, we must use a proxy for σ_R. Here we will attempt to determine whether either σ_M or FWHM_M in a single or mean spectrum can serve as a suitable proxy for σ_R; we know a priori that there are good, but non-linear, correlations between σ_R and both σ_M and FWHM_M. It therefore seems likely that either σ_M or FWHM_M could be used as a proxy for σ_R. Investigation of the relationship among the line-width measures motivated a broader effort to produce easy-to-use prescriptions for computing accurate black hole masses using Hβ and C IV emission lines and nearby continuum flux measurements for each line.
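The definitions above translate directly into code. The following numpy sketch computes the mean and rms spectra from a spectral time series and measures σ_line and FWHM from a continuum-subtracted line profile; the pixel-level FWHM estimate and the trapezoidal integration are simplifying assumptions.

```python
import numpy as np

def mean_and_rms_spectra(flux):
    """Mean and rms residual spectra from a time series of shape (N, n_wave)."""
    mean = flux.mean(axis=0)
    rms = np.sqrt(((flux - mean) ** 2).sum(axis=0) / (flux.shape[0] - 1))
    return mean, rms

def line_dispersion(wave, profile):
    """sigma_line: square root of the second moment of the profile P(lambda)."""
    norm = np.trapz(profile, wave)
    lam0 = np.trapz(wave * profile, wave) / norm           # line centroid
    lam2 = np.trapz(wave ** 2 * profile, wave) / norm
    return np.sqrt(lam2 - lam0 ** 2)

def fwhm(wave, profile):
    """FWHM from the outermost half-maximum crossings (pixel-level estimate)."""
    above = np.where(profile >= 0.5 * profile.max())[0]
    return wave[above[-1]] - wave[above[0]]
```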
We note that we do not discuss Mg II RM results in this contribution, as the present situation has been addressed rather thoroughly by Bahk, Woo, & Park (2019) and new SDSS-RM results will be published soon (Homayouni et al. 2020). In §2, the data used in this investigation are described. In §3, the relationship between the Hβ reverberation lag and different measures of the AGN luminosity is considered, and we identify the physical parameters that lead to accurate black-hole mass determinations. In §4, we similarly discuss masses based on C IV. In §5, we present simple empirical formulae for estimating black hole masses from Hβ or C IV. The results of this investigation and our future plans to improve this method are outlined in §6. Our results are briefly summarized in §7. Throughout this work, we assume H0 = 72 km s⁻¹ Mpc⁻¹, Ω_matter = 0.3, and Ω_Λ = 0.7 (a sketch of how luminosities follow from this cosmology appears after the data description below).

Data

We use two high-quality databases for this investigation:

1. Spectra and measurements for previously reverberation-mapped AGNs, for Hβ (Table A1) and for C IV (Table A2). These are mostly taken from the literature (see also Bentz & Katz 2015 for a compilation). Sources without estimates of host-galaxy contamination to the optical luminosity L(5100 Å) have been excluded. This database provides the fundamental R-L calibration for the single-epoch mass scale. In this contribution, we will refer to these collectively as the "reverberation-mapping database (RMDB)".

2. Spectral measurements from the Sloan Digital Sky Survey Reverberation Mapping Program (hereafter "SDSS-RM", or more compactly simply "SDSS"). We use both Hβ (Table A3) and C IV (Table A4) data from the 2014-2018 SDSS-RM campaign (Grier et al. 2017b; Shen et al. 2019; Grier et al. 2019). Each spectrum is comprised of the average of the individual spectra obtained for each of the 849 quasars in the SDSS-RM field.

In addition, because C IV RM measurements remain rather scarce, we augmented the C IV sample with measurements from Vestergaard & Peterson (2006, hereafter VP06), who combined single-epoch luminosity and line-width measurements from archival UV spectra with Hβ-based mass measurements of the objects in Table A1. The UV parameters are given in Table A5; we note, however, that we have excluded 3C 273 and 3C 390.3 because they both have uncertainties in their virial product larger than 0.5 dex; the former was a particular problem because there were far more measurements of UV parameters for this source than for any other, and the combination of a large number of measurements and a poorly constrained virial product conspired to disguise real correlations. All SDSS-RM spectra have been reduced and processed as described by Shen et al. (2015) and Shen et al. (2016), including post-processing with PrepSpec (Horne, in preparation). We note that only the lags (τ) and line dispersions in the rms spectrum (σ_R) are taken directly from these reverberation analyses.

For the purpose of mass estimation, we need to establish relationships based on the most reliable data. Many of the SDSS average spectra are still quite noisy, so we imposed quality cuts. Even though we are for the most part restricting our attention to the SDSS-RM quasars for which there are measured lags for Hβ (44 quasars) or C IV (48 quasars), we impose these cuts on the entire sample for the sake of later discussion.
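For reference, luminosities in what follows can be computed from observed fluxes with the adopted cosmology; a minimal sketch using astropy (the flux value is an arbitrary example):

import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=72.0, Om0=0.3)  # the cosmology adopted above

def log_luminosity(flux_cgs, z):
    # flux_cgs: observed lambda*F_lambda flux in erg/s/cm^2
    d_cm = cosmo.luminosity_distance(z).to(u.cm).value
    return np.log10(4.0 * np.pi * d_cm**2 * flux_cgs)

print(log_luminosity(1.0e-13, 0.5))  # log L in erg/s for a z = 0.5 quasar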
The first quality condition is that

V ≥ 1000 km s⁻¹

for both V = FWHM_M and V = σ_M, since AGNs with lines narrower than 1000 km s⁻¹ are probably Type 2 AGNs; there are some Type 1 AGNs with line widths narrower than this, including several in Table A1, but these are low-luminosity AGNs (e.g., Greene & Ho 2007), not SDSS quasars. The second quality condition is that the best-fit value V(BF) must lie within an acceptable range for both FWHM and σ_line. A third quality condition is a "signal-to-noise" (S/N) requirement that the line width must be significantly larger than its uncertainty; some experimentation showed that this is a good criterion for both V = FWHM_M and V = σ_M to remove the worst outliers from the line-width comparisons discussed in §3.2 and §4.1. Finally, we removed quasars that were flagged by Shen et al. (2019) as having broad absorption lines (BALs), mini-BALs, or suspected BALs in C IV.

The effect of each quality cut on the size of the database available for each emission line is shown in Table 1. Of the 44 SDSS-RM quasars with measured Hβ lags, 12 failed to meet at least one of the quality criteria, usually the S/N requirement, thus reducing the SDSS-RM Hβ sample to 32 quasars. Three quasars with C IV reverberation measurements (RMID 362, 408, and 722) were rejected for significant BALs, thus reducing the SDSS-RM C IV reverberation sample to 45 quasars. As we will show in §5, another effect of imposing quality cuts is, not surprisingly, that it removes some of the lower luminosity sources from the sample.

Fitting Procedure

Throughout this work, we use the fitting algorithm described by Cappellari et al. (2013), which combines the Least Trimmed Squares technique of Rousseeuw & van Driessen (2006) with a least-squares fitting algorithm that allows errors in all variables and includes intrinsic scatter, as implemented by Dalla Bontà et al. (2018). Briefly, the fits we perform here are of the general form

y = a + b (x − x₀),

where x₀ is the median value of the observed parameter x. The fit is done iteratively with 5σ rejection (unless stated otherwise) and the best fit minimizes the quantity

χ² = Σ_{i=1}^{N} [y_i − a − b (x_i − x₀)]² / (Δy_i² + b² Δx_i² + ε_y²),

where Δx_i and Δy_i are the errors on the variables x_i and y_i, and ε_y is the sigma of the Gaussian describing the distribution of intrinsic scatter in the y coordinate; ε_y is iteratively adjusted so that the χ² per degree of freedom ν = N − 2 has the value of unity expected for a good fit. The observed scatter is

Δ = { (1/N) Σ_{i=1}^{N} [y_i − a − b (x_i − x₀)]² }^{1/2}.

The value of ε_y is added in quadrature when y is used as a proxy for x. The bivariate fits are intended to establish the physical relationships among the various parameters and also to fit residuals. The actual mass estimation equations that we use will be based on multivariate fits of the general form

z = a + b (x − x₀) + c (y − y₀),

where the parameters are as described above, plus an additional observed parameter y that has median value y₀. Similarly to the linear fits, the plane fitting minimizes the quantity

χ² = Σ_{i=1}^{N} [z_i − a − b (x_i − x₀) − c (y_i − y₀)]² / (Δz_i² + b² Δx_i² + c² Δy_i² + ε_z²),

with Δx_i, Δy_i, and Δz_i the errors on the variables (x_i, y_i, z_i), and ε_z the sigma of the Gaussian describing the distribution of intrinsic scatter in the z coordinate; ε_z is iteratively adjusted so that the χ² per degree of freedom ν = N − 3 has the value of unity expected for a good fit. The observed scatter is defined analogously to the linear case.
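The sketch below shows the essence of this scheme: a weighted linear fit with errors in both coordinates and an intrinsic-scatter term ε_y tuned so that χ²/ν = 1. It is a simplified stand-in for the published algorithm; in particular, no Least Trimmed Squares outlier rejection is performed, and the ε_y update rule is our own heuristic.

import numpy as np

def fit_with_intrinsic_scatter(x, y, dx, dy, n_iter=100):
    x0 = np.median(x)
    a, b, eps = 0.0, 1.0, 0.0
    for _ in range(n_iter):
        # effective variance combines dy, the slope-projected dx, and eps
        w = 1.0 / (dy**2 + b**2 * dx**2 + eps**2)
        X = np.column_stack([np.ones_like(x), x - x0])
        A = (X * w[:, None]).T @ X
        a, b = np.linalg.solve(A, (X * w[:, None]).T @ y)
        resid = y - a - b * (x - x0)
        chi2_nu = np.sum(resid**2 * w) / (len(x) - 2)
        if abs(chi2_nu - 1.0) < 1e-3:
            break
        # grow/shrink eps to push chi^2 per dof toward unity
        eps = np.sqrt(max(np.mean(resid**2 - dy**2 - b**2 * dx**2), 0.0))
    return a, b, eps

rng = np.random.default_rng(42)
x_true = rng.uniform(42.0, 46.0, 80)
y_true = 1.5 + 0.53 * (x_true - np.median(x_true)) + rng.normal(0.0, 0.15, 80)
dx, dy = np.full(80, 0.05), np.full(80, 0.10)
x_obs = x_true + rng.normal(0.0, 1.0, 80) * dx
y_obs = y_true + rng.normal(0.0, 1.0, 80) * dy
print(fit_with_intrinsic_scatter(x_obs, y_obs, dx, dy))  # slope ~0.53, eps ~0.15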
3. MASSES BASED ON Hβ

3.1. The R-L Relationships

In this section, we examine the calibration of the fundamental Hβ R-L relationship using various luminosity measures. The analysis in this section is based only on the RMDB sample in Table A1 because all these sources have been corrected for host-galaxy starlight. To obtain accurate masses from Hβ, contaminating starlight from the host galaxy must be accounted for in the luminosity measurement, or the mass will be overestimated. For reverberation-mapped sources, this has been done by modeling unsaturated images of the AGNs obtained with the Hubble Space Telescope. The AGN contribution was removed from each image by modeling the images as an extended host galaxy plus a central point source representing the AGN. The starlight contribution to the reverberation-mapping spectra is then determined by simulated aperture photometry of the AGN-free image.

In the left panel of Figure 1, we show the Hβ lag as a function of the AGN continuum luminosity with the host contribution removed in each case. This essentially reproduces the result of Bentz et al. (2013); small differences are due solely to improvements in the quality and quantity of the RM database (cf. Table A1). We give the best-fit values to equation (10) in the first row of Table 2. Accounting for the host-galaxy contribution in the same way for a large number of AGNs, such as those in SDSS-RM (not to mention the entire SDSS catalog), is simply not feasible. It is well known, however, that there is a tight correlation between the AGN continuum luminosity and the luminosity of Hβ (e.g., Yee 1980; Ilić et al. 2017), and it has indeed been argued that the Hβ emission-line luminosity can be used as a proxy for the AGN continuum luminosity for reverberation studies.

Figure 1. Left: The rest-frame Hβ lag in days is shown as a function of the AGN luminosity L_AGN(5100 Å) in erg s⁻¹. The host-galaxy starlight contribution has been removed by using unsaturated HST images (see Bentz et al. 2013). Right: The Hβ lag in days is shown as a function of the broad Hβ luminosity L(Hβ broad) in erg s⁻¹. The narrow component of Hβ has been removed in each case where it was sufficiently strong (i.e., easily identifiable) to isolate. In both panels, the solid line shows the best fit to the data using equation (10) with coefficients given in Table 2. The short dashed lines show the ±1σ uncertainty (equivalent to enclosing 68% of the values for a Gaussian distribution) and the long dashed lines show the 2.6σ uncertainties (equivalent to enclosing 99% of the values for a Gaussian distribution).

However, in some of the reverberation-mapped sources, narrow-line Hβ contributes significantly to the total Hβ flux; NGC 4151 is an extreme example (e.g., Antonucci & Cohen 1983; Bentz et al. 2006a; Fausnaugh et al. 2017). Whenever the narrow-line component can be isolated, it has been subtracted from the total Hβ flux. In Figure 2, we show the tight relationship between L_AGN(5100 Å) and L(Hβ broad); the best-fit coefficients for this relationship are given in Table 2. In the right panel of Figure 1, we show the Hβ lag as a function of the luminosity of the broad component of Hβ, with the narrow component removed whenever possible. We give the best-fit values to equation (10) in the second row of Table 2, which shows that the slope of this relationship is nearly identical to the slope of the R-L relationship using the AGN continuum. The luminosity of the Hβ broad component is thus an excellent proxy for the AGN luminosity and requires only removal of the Hβ narrow component (at least when it is significant), which is much easier than estimating the starlight contribution to the continuum luminosity at 5100 Å.
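As a usage sketch, a lag predicted from an R-L relation takes the form log τ = K + α (log L − 44); the coefficients below are placeholders close to the published Bentz et al. (2013) calibration rather than the values actually tabulated in Table 2.

import numpy as np

K, ALPHA = 1.527, 0.533  # illustrative placeholders (cf. Bentz et al. 2013)

def predicted_hbeta_lag_days(log_L5100):
    # log_L5100: starlight-corrected log L_AGN(5100 A) in erg/s
    return 10.0 ** (K + ALPHA * (log_L5100 - 44.0))

print(predicted_hbeta_lag_days(43.0))  # ~10 days for a Seyfert-like luminosity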
Moreover, by using the line flux instead of the continuum flux, we can include core-dominated radio sources where the continuum may be enhanced by the jet component (Greene & Ho 2005). This is therefore the R-L relationship that we will use in what follows.

Figure 2. The relationship between L_AGN(5100 Å) and L(Hβ broad) for the sources in Table A1. The black solid line is the regression of L(Hβ broad) on L_AGN(5100 Å); the red dotted line is the regression of L_AGN(5100 Å) on L(Hβ broad), which we use in equation (24). The coefficients for both fits are given in Table 2.

Line-Width Relationships

We now consider the use of σ_M and FWHM_M as proxies for σ_R (cf. Collin et al. 2006; Wang et al. 2019). The left panel of Figure 3 shows the relationship between σ_R(Hβ), the Hβ line dispersion in the rms spectrum, and σ_M(Hβ), the Hβ line dispersion in the mean spectrum. The relationship is nearly linear (slope = 1.085 ± 0.045) and the intrinsic scatter is small (0.079 dex). The fit coefficients are given in the first row of Table 3. We also show in the right panel of Figure 3 the relationship between σ_R(Hβ) and the FWHM of Hβ in the mean spectrum, FWHM_M(Hβ). The fit coefficients are given in the second row of Table 3. The relationship is far from linear (slope = 0.535 ± 0.042), and the scatter ε_y is larger than it is for the σ_R(Hβ)-σ_M(Hβ) relationship, even after removal of the notable outliers. While it is clear that σ_M(Hβ) is an excellent proxy for σ_R(Hβ), the value of FWHM_M(Hβ) is less clear. Nevertheless, we will fit both versions in order to understand the relative merits of each.

Figure 3. Line-width comparisons for Hβ. Blue filled circles are for the RMDB sample (Table A1) and open green triangles are for the SDSS sample (Table A3). The solid lines are best fits to equation (10) with coefficients in Table 3. The short dashed and long dashed lines indicate the ±1σ and ±2.6σ envelopes, respectively, and the red dotted lines indicate where the two line-width measures are equal. Crosses are points that were rejected at the 2.6σ (99%) level and are color-coded like the circles. The relationship on the left is nearly linear (slope = 1.085 ± 0.045) and the scatter ε_y is low (0.079 dex). It is clear in the right panel that FWHM_M(Hβ) and σ_R(Hβ) are well correlated, but the relationship is significantly non-linear (slope = 0.535 ± 0.042), the scatter ε_y is slightly larger (0.106 dex), and there are several significant outliers.

Single-Epoch Predictors of the Virial Product

In the previous subsections, we have re-established the correlations between τ(Hβ) and L(Hβ broad) and between σ_R(Hβ) and both σ_M(Hβ) and FWHM_M(Hβ). As a first approximation for a formula to estimate single-epoch masses, we fit relations of the plane form described in the Fitting Procedure,

log µ = a + b [log L(Hβ broad) − log L₀] + c [log V − log V₀],

with V = σ_M(Hβ) (equation 16) and V = FWHM_M(Hβ) (equation 17). The results of these fits based on the combined RMDB data (Table A1) and SDSS data (Table A3) are given in the first two rows of Table 4, and illustrated in the upper panels of Figure 4. Using these coefficients, we have initial fits, equations (18) and (19), for σ_M(Hβ) and FWHM_M(Hβ), respectively. The luminosity coefficient b and the line-width coefficient c are roughly as expected from the virial relationship and the R-L relationship, and we note that the line-width coefficient for FWHM_M (c = 1.039) is much smaller than that of σ_M (c = 1.757), as expected from Figure 3. It is clear that both equations (18) and (19) overestimate masses at the low end and underestimate them at the high end, thus biasing the prediction. This suggests that another parameter is required for the single-epoch virial product prediction.

Figure 4. Upper panels: single-epoch virial products predicted from equations (16) and (17), on the left and right, respectively, with coefficients from Table 4, compared with the actual RM measurements for the same sources. Blue filled circles represent RMDB data (Table A1) and green open triangles represent SDSS data (Table A3). The solid line shows the best fit to the data, and the red dotted line shows where the two values are equal. The short and long dashed lines show the ±1σ and ±2.6σ envelopes, respectively. It is clear that this is an inadequate virial product predictor, as it systematically underestimates higher masses and overestimates lower masses. The two lower panels show the same relationship after the empirical corrections as embodied in equations (36) and (38) for σ_M and FWHM_M, respectively. The best-fit lines cover the equality lines.
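Applying such a fit is mechanical once the coefficients are known; the sketch below shows the plane-fit form with purely hypothetical values of a, b, c and the pivot medians, which stand in for the actual entries of Table 4.

import numpy as np

A, B, C = 7.0, 0.60, 1.76    # hypothetical stand-ins for Table 4
LOG_L0, LOG_V0 = 42.0, 3.55  # assumed median pivots of the fitted samples

def log_mu_se_hbeta(log_L_hbeta, log_sigma_M):
    # plane-fit form z = a + b*(x - x0) + c*(y - y0) from the Fitting Procedure
    return A + B * (log_L_hbeta - LOG_L0) + C * (log_sigma_M - LOG_V0)

print(log_mu_se_hbeta(42.5, np.log10(2000.0)))  # a single-epoch virial product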
We investigated the possibility of another parameter by plotting the residuals ∆log µ = log µ_RM − log µ_SE against other parameters, specifically luminosity, mass (virial product), Eddington ratio, emission-line lag, line width, and the line-width ratio FWHM/σ_line for both mean and rms spectra. The most significant correlation between the virial product residuals and other parameters was with Eddington ratio, which has also been a result of other recent investigations (Grier et al. 2017b; Du et al. 2018; Du & Wang 2019; Fonseca Alvarez et al. 2019; Martínez-Aldama et al. 2019). To determine the Eddington ratio, we start with the Eddington luminosity

L_Edd = 4πGc m_p M / σ_e,

where m_p is the proton mass and σ_e is the Thomson cross-section. The black hole mass is log M = log f + log µ and, as explained in the Appendix, we assume log f = 0.683 ± 0.150 (Batiste et al. 2017), so the Eddington luminosity is

log L_Edd = log f + 38.099 + log µ_RM = 38.782 + log µ_RM.

The bolometric luminosity can be obtained from the observed 5100 Å AGN luminosity plus a bolometric correction. We ignore inclination effects and, following Netzer (2019), we use

L_bol = 40 [L_AGN(5100 Å) / 10⁴² erg s⁻¹]^(−0.2) L_AGN(5100 Å).

Since we are using L(Hβ broad) as a proxy for L_AGN(5100 Å), we substitute L(Hβ broad) for L_AGN(5100 Å) by fitting the luminosities in Table A1, yielding the regression given in Table 2 (equation 24), so we can write the bolometric luminosity in terms of L(Hβ broad) (equation 25). The Eddington ratio ṁ is given by

log ṁ = log L_bol − log L_Edd.

Using equations (25) and (21), the Eddington ratio can then be written entirely in terms of L(Hβ broad) and µ_RM. To correct the single-epoch masses for Eddington ratio, we fit the equation

∆log µ = a + b (log ṁ − log ṁ₀)

and use this as a correction to our initial fits, equations (18) and (19). The best-fit parameters for the σ_M- and FWHM_M-based predictors of µ_SE are given in Table 5 and shown in Figure 5. Combining the correction equation (28), with the best-fit coefficients in Table 5, with equations (18) and (19) yields the corrected single-epoch mass estimators, equations (36) and (38), for σ_M and FWHM_M, respectively (the FWHM_M version has zero point log µ_SE(Hβ) = 6.974). Once the dependence on Eddington ratio is removed, the residuals do not appear to correlate with other properties. The intrinsic scatter about the final residuals is 0.197 dex for σ_M-based masses and 0.204 dex for FWHM_M-based masses.

Figure 5. Corrections for the Hβ single-epoch virial products. Lower panels: residuals after subtraction of the best fit in the panel above. The ε_y scatter in the residuals is 0.197 dex for the σ_M-based virial products and 0.204 dex for the FWHM_M-based virial products. In all panels, the solid blue circles represent RMDB data (Table A1) and the open green triangles represent SDSS data (Table A3). The solid line shows the best fit to the data. The short dashed and long dashed lines are the ±1σ and ±2.6σ envelopes, respectively. The coefficients of the fits are given in Table 5. Error bars are measurement uncertainties only, without systematic errors.
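Numerically, the Eddington-ratio machinery above reduces to a few lines. The sketch below uses the 5100 Å luminosity directly with the Netzer (2019) correction quoted in the text; substituting L(Hβ broad) via equation (24) would require the Table 2 coefficients, which are not reproduced here.

import numpy as np

LOG_F = 0.683  # virial scale factor (Batiste et al. 2017)

def log_eddington_ratio(log_L5100, log_mu):
    log_bc = np.log10(40.0) - 0.2 * (log_L5100 - 42.0)  # Netzer (2019)
    log_Lbol = log_bc + log_L5100                       # bolometric luminosity
    log_Ledd = 38.099 + LOG_F + log_mu                  # Eddington luminosity
    return log_Lbol - log_Ledd

print(log_eddington_ratio(44.0, 7.5))  # ~ -1.1 for this example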
4. MASSES BASED ON C IV

Fundamental Relationships

As noted in §1, the veracity of C IV-based mass estimates is unclear and remains controversial. The ideal situation would be to have a large number of AGNs with both C IV and Hβ reverberation measurements to effect a direct comparison. There are, unfortunately, very few AGNs that have both; indeed, Table A2 of the Appendix lists all C IV results for which there are corresponding Hβ measurements in Table A1. For the few sources with both C IV and Hβ reverberation measurements, we plot the virial products µ_RM(C IV) and µ_RM(Hβ) in Figure 6; these are in each case a weighted mean value of the virial products for each of the observations of Hβ and C IV for the AGNs that appear in both Tables A1 and A2. The close agreement of these values reassures us that the C IV-based RM masses can be trusted, at least over the range of luminosities sampled.

We now need to consider whether or not luminosities and mean line widths are suitable proxies for emission-line lags and rms line widths in the case of C IV. In Figure 7, we show the relationship between the UV continuum luminosity L(1350 Å) and the C IV emission-line lag τ(C IV) based on the C IV data in Table A2, plus the SDSS-RM C IV data in Table A4. The coefficients of the fit are given in Table 2. We note again that we have removed from the Grier et al. (2019) sample in Table A4 three quasars with BALs, thus reducing the sample size from 48 to 45. The slope of the C IV R-L relation (0.517) is consistent with that of Hβ (0.492), though the ε_y scatter is substantially greater (0.336 dex for C IV compared to 0.213 dex for Hβ). Definition of the relationship does not depend on the two separate, very short C IV lags measured for the dwarf Seyfert NGC 4395. Thus it seems clear that we can use L(1350 Å) as a reasonable proxy for τ(C IV).

We show the relationship between the C IV line dispersion measured in the rms spectrum, σ_R(C IV), and the line dispersion in the mean spectrum, σ_M(C IV), in Figure 8. The best-fit coefficients are given in Table 3. The correlation is good. However, the correlation between FWHM_M(C IV) and σ_R(C IV), also shown in Figure 8, is rather poor (see also Wang et al. 2020) and demonstrates that FWHM_M(C IV) is a dubious proxy for σ_R(C IV). Measurement of FWHM_M(C IV) is clearly not a reliable predictor of σ_R(C IV), so we will not consider FWHM_M(C IV) further.

Single-Epoch Masses

Following the same procedures as with Hβ, we use the RMDB data (Table A2) and the SDSS-RM data (Table A4) to fit the equation

log µ = a + b [log L(1350 Å) − log L₀] + c [log σ_M(C IV) − log σ₀]

(equation 32). The resulting fit is shown in Figure 9 and the best-fit coefficients are given in Table 4. With the coefficients from this fit and equation (32), we can generate predicted virial products µ_SE(C IV). We compare the measured reverberation mass µ_RM with the single-epoch prediction µ_SE based on this fit in the left panel of Figure 9. As was the case for Hβ (Figure 4), the distribution of points is slightly skewed relative to the diagonal and, guided by our result for Hβ, we plot the residuals in log µ_RM − log µ_SE versus Eddington ratio ṁ in the upper left panel of Figure 10. The Eddington ratio for the UV data is

log ṁ = −33.737 + 0.9 log L(1350 Å) − log µ_RM,

where again we have used a bolometric correction for L(1350 Å) from Netzer (2019),

L_bol = 7 [L(1350 Å) / 10⁴² erg s⁻¹]^(−0.1) L(1350 Å).

We fitted equation (28) for C IV and the coefficients of the fit are given in Table 5.
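The corresponding computation for the UV sample is a one-liner, since the bolometric correction and Eddington luminosity have already been folded into the constants quoted above:

def log_mdot_civ(log_L1350, log_mu_rm):
    # log mdot = -33.737 + 0.9 log L(1350 A) - log mu_RM, as given in the text
    return -33.737 + 0.9 * log_L1350 - log_mu_rm

print(log_mdot_civ(45.0, 8.0))  # ~ -1.24 for a luminous SDSS-like quasar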
The offset in the upper left panel of Figure 10 between the residuals of the RMDB and VP06 data on one hand and those of the SDSS data on the other might seem to be problematic, and we were initially concerned that this might be a data integrity issue. However, upon examining the distribution of mass and luminosity for these three samples as seen in Figure 11, we see clearly that the mass distribution of the SDSS sources is skewed toward much higher values than for the RMDB and VP06 sources, which are relatively local and less luminous than the SDSS quasars. We will thus proceed by examining mass residuals versus both Eddington ratio and µ_RM.

Figure 7. Relationship between the C IV rest-frame emission-line lag τ(C IV) and the continuum luminosity at 1350 Å. Blue filled circles represent RMDB data (Table A2) and green open triangles represent SDSS data (Table A4). The solid line is the best fit to the data using equation (10) with coefficients given in Table 2. The short dashed and long dashed lines are the ±1σ and ±2.6σ envelopes, respectively. The Spearman rank coefficient for these data is ρ = 0.503. If the two lowest luminosity points (both measurements of the dwarf Seyfert NGC 4395) are omitted, the Spearman rank coefficient decreases to ρ = 0.481.

Figure 10 illustrates the process by which we eliminate the mass residuals in successive iterations. We compute the mass residuals ∆log µ = log µ_RM − log µ_SE from equation (32); these are shown versus ṁ (left column) and µ_RM (right column). We fit these residuals versus ṁ (top left) and subtract the best fit to equation (28), whose coefficients are given in Table 5. We subtract this fit from the mass residuals to get the corrected residuals in the middle panels. Examination of these residuals as a function of other parameters revealed that they are still correlated with µ_RM (middle right), suggesting that the importance of the Eddington ratio depends on the black hole mass. We therefore fit the residuals a second time, now as a function of µ_RM (equation 35).

Figure 8. Left: Relationship between the C IV line dispersion in the mean and rms spectra of reverberation-mapped AGNs. The Spearman rank coefficient is ρ = 0.873. Right: Relationship between FWHM_M(C IV) and σ_R(C IV) for reverberation-mapped AGNs. The Spearman rank coefficient for these data is ρ = 0.524. In both panels, blue filled circles represent RMDB sources in Table A2 and green open triangles represent SDSS-RM sources in Table A4. The red dotted line shows the locus where the two line-width measures are equal. The solid line is the best fit to equation (10) and the coefficients are given in Table 3. The short dashed and long dashed lines show the ±1σ and ±2.6σ envelopes, respectively.

Figure 9. Left: Comparison of single-epoch and reverberation virial products using the C IV reverberation data from Table A2 (blue filled circles), the SDSS-RM C IV reverberation data from Table A4 (green open triangles), and data from Table A5 (red open circles). The solid line is the best fit to the data and has slope 0.787 ± 0.041. As was the case with Hβ, masses are overestimated at the low end and underestimated at the high end, excepting the three very low mass measurements. Right: Comparison of single-epoch virial products after empirical correction as given in equation (40). In both panels, the solid line is the best fit to equation (32). The short dashed and long dashed lines define the ±1σ and ±2.6σ envelopes, respectively. The diagonal red dotted line is the locus where µ_RM and µ_SE are equal.

The best fit to equation (35) is shown in the middle right panel of Figure 10 and the coefficients are given in Table 5. Subtraction of the best fit yields the residuals shown in the bottom two panels. We would under most circumstances regard this procedure with some trepidation from a statistical point of view, since µ_RM appears explicitly in one correction and implicitly in the Eddington ratio. A generalized solution would have multiple degeneracies, as both mass and luminosity appear in multiple terms.
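The iteration just described can be sketched as two successive residual fits. The unweighted np.polyfit below is a deliberate simplification of the error-weighted fits actually used, and the input arrays are assumed to be matched per-object samples:

import numpy as np

def correct_single_epoch(log_mu_rm, log_mu_se, log_mdot):
    resid = log_mu_rm - log_mu_se
    b1, a1 = np.polyfit(log_mdot, resid, 1)    # step 1: residuals vs mdot
    resid = resid - (a1 + b1 * log_mdot)
    b2, a2 = np.polyfit(log_mu_rm, resid, 1)   # step 2: residuals vs mu_RM
    resid = resid - (a2 + b2 * log_mu_rm)
    corrected = log_mu_se + a1 + b1 * log_mdot + a2 + b2 * log_mu_rm
    return corrected, resid                    # resid: final scatter (dex)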
However, the residual corrections are physically motivated; several previous investigations have also concluded that Eddington ratio is correlated with the deviation from the Bentz et al. (2013) R-L relationship, and the middle panels of Figure 10 suggest that the impact of Eddington ratio varies slightly with mass. Nevertheless, one would prefer to work with parameters that are correlated with, or are indicators of, ṁ and µ_RM, as we will discuss in §6. It is worth noting in passing that after correcting for Eddington ratio (Figure 5), the residuals in the Hβ-based mass estimates show no correlation with either mass or luminosity.

Figure 10. Mass residuals for C IV at successive stages of the correction procedure, shown versus ṁ (left column) and µ_RM (right column). The best fit to equation (35) is shown in the middle right panel, with coefficients given in Table 5. Note that the intrinsic scatter in this relationship is ε_y = 0.000 ± 0.000 because the error bars are so large. The bottom panels show the mass residuals versus ṁ and µ_RM after subtracting the fit in the middle right panel. The scatter in the bottom panels is 0.138 dex. In all panels, the blue filled circles represent RMDB data (Table A2), the green open triangles are SDSS data (Table A4), and the red open circles are VP06 data (Table A5). Best fits are shown as solid lines and the short dashed and long dashed lines indicate the ±1σ and ±2.6σ envelopes.

Figure 11. Distribution in virial product µ_RM for the RMDB (Table A2, blue solid line), SDSS (Table A4, green dotted line), and VP06 (Table A5, red solid line) samples. The VP06 sample is a subset of the RMDB sample, which is dominated by the relatively low-mass Seyfert galaxies that were the first sources studied by reverberation mapping. The SDSS quasars are comparatively more massive and more luminous.

COMPUTING SINGLE-EPOCH MASSES

To briefly reiterate our approach so far, we started with the assumption that µ_SE = f(R, L) only. This proved to be inadequate, so we examined the residuals in the log µ_SE-log µ_RM relationship and found that these correlated best with Eddington ratio ṁ: fundamentally, at increasing ṁ, lags fall increasingly below the Bentz et al. (2013) R-L relationship. In the case of C IV, we found additional residuals that correlated with µ_RM, although we cannot definitively demonstrate that some part of this is not attributable to inhomogeneities in the database (a point that will be pursued in the future). While we believe this analysis identifies the physical parameters that affect the mass estimates, there are multiple degeneracies, with both mass and luminosity appearing in more than one term. Instead of trying to fit coefficients to all the physical parameters that have been identified, we can apply a purely empirical correction to equations (16), (17), and (32), since the residuals in the log µ_RM-log µ_SE relationships (upper panels in Figure 4 and left panel of Figure 9) are rather small. We can combine the basic R-L fits (equations 16, 17, and 32) with the residual fits (equations 28 and 35) to obtain prescriptions that work over the mass range sampled. Renormalizing for convenience, we can write the final prescriptions as equations (36)-(40). Here f is the scaling factor, which is discussed briefly in the Appendix, and ∆log P is the uncertainty in the parameter log P. For the σ_M-based Hβ estimator, the intrinsic scatter in this relationship is 0.309 dex, and this must be added in quadrature to the random error; for the FWHM_M-based version, the intrinsic scatter is 0.371 dex. Here we assume f = 4.28 (Batiste et al. 2017).
The C IV-based estimator, equation (40), has an associated uncertainty obtained by propagating the measurement errors; the intrinsic scatter in this relationship is 0.408 dex. Single-epoch predictions and reverberation-based masses for the AGNs in Tables A2, A4, and A5 are compared in the right panel of Figure 9. In Figure 12, we show the distribution in bolometric luminosity and black hole mass for the entire sample of SDSS-RM quasars for which Hβ or C IV single-epoch masses can be estimated.

Figure 12. Distribution in bolometric luminosity and black hole mass for the SDSS-RM quasars with Hβ- or C IV-based single-epoch masses. Bolometric corrections were made using equations (23) and (34). On the left side, the quality cuts of §2.1 have been imposed; on the right side, no quality cuts have been made.

6. DISCUSSION

Single-Epoch Masses

Our primary goal has been to find simple, yet unbiased, prescriptions for estimating the masses of the black holes that power AGNs. Our underlying assumption has been that the most accurate measure of the virial product is given by using the emission-line lag τ and the line width in the rms spectrum σ_R (e.g., equation A1 in the Appendix), as that quantity produces, upon adjusting by the scaling factor f, an M_BH-σ* relationship for AGNs that is in good agreement with that for quiescent galaxies. Given that both τ and σ_R average over structure in a complex system (cf. Barth et al. 2015), it is somewhat surprising that this method of estimation works as well as it does.

Here we have shown that the broad component of the Hβ emission line is a good proxy for the starlight-corrected AGN luminosity (Figure 1). This is useful since it eliminates the difficult task of accurately modeling the host-galaxy starlight contribution to the continuum luminosity. Moreover, the line luminosity and σ_R reflect the BLR state at the same time; a measurement of the continuum luminosity, by contrast, better represents the state of the BLR at a time τ in the future on account of the light travel-time delay within the system (Pogge & Peterson 1992; Gilbert & Peterson 2003; Barth et al. 2015); this is, however, generally a very small effect. For the sake of completeness, we also note that there is a small, but detectable, lag between continuum variations at shorter wavelengths and those at longer wavelengths.

We have also confirmed that, for the case of Hβ, both σ_M and FWHM_M are reasonable proxies for σ_R, though σ_M is somewhat better than FWHM_M. On the other hand, the case of C IV remains problematic, as it differs in a number of ways from the other strong emission lines:

1. The equivalent width of C IV decreases with luminosity, which is known as the Baldwin Effect (Baldwin 1977); C IV is driven by higher-energy photons than, say, the Balmer lines, and the Baldwin Effect reflects a softening of the high-ionization continuum. This could be due to higher Eddington ratio (Baskin & Laor 2004) or because more massive black holes have cooler accretion disks (Korista, Baldwin, & Ferland 1998).

2. The C IV profile is often blueshifted relative to the systemic velocity of the AGN, a signature of an outflowing component in the line-emitting gas.

3. BALs in the short-wavelength wing of C IV, another signature of outflow, are common (Weymann et al. 1991; Hall et al. 2002; Hewett & Foltz 2003; Allen et al. 2011). We remind the reader that in §2.1 we removed ∼17% of our SDSS C IV sample because the presence of BALs precludes accurate line-width measurements.

4. The pattern of "breathing" in C IV is the opposite of what is seen in Hβ (Wang et al. 2020). Breathing refers to the response of the emission lines, both lag and line width, to changes in the continuum luminosity. In the case of Hβ, an increase in luminosity produces an increase in lag and a decrease in line width (Gilbert & Peterson 2003; Goad, Korista, & Knigge 2004; Cackett & Horne 2006). In the case of C IV, however, the line width increases when the continuum luminosity increases, contrary to expectations from the virial theorem (equation 2).
We must certainly be mindful that outflows can affect a mass measurement, though the effect is small if the gas is at escape velocity. Notably, in the cases studied to date there is good agreement between Hβ-based and C IV-based virial products (Figure 6), though, again, these are local Seyfert galaxies that are not representative of the general quasar population.

The C IV breathing issue is addressed in detail by Wang et al. (2020), building on evidence for a non-reverberating narrow core or blue excess in the C IV emission line. In this two-component model, the variable part of the line is much broader than the non-variable core. As the continuum brightens, the variable broad component increases in prominence, resulting in a larger value of σ_M. As the broad component reverberates in response to continuum variations, σ_M will track σ_R much better than FWHM_M, thus explaining the breathing characteristics and why FWHM_M is a poor line-width measure for estimating black hole masses. Physical interpretation of the non-varying core remains an open question: one suggestion is that it might be an optically thin disk wind or an inner extension of the narrow-line region.

The Role of Eddington Ratio

It is well known that there are strong correlations and anticorrelations among the UV-optical spectral features of AGNs, as revealed by Principal Component Analysis (PCA) (Boroson & Green 1992; Sulentic et al. 2000; Boroson 2002; Shen & Ho 2014; Sun & Shen 2015; Marziani et al. 2018, and references therein). The strongest of these correlations, Eigenvector 1, is most clearly characterized by the anticorrelation between (a) the strength of the Fe II λ4570 and Fe II λλ5190, 5320 complexes on either side of the broad Hβ complex and (b) the strength of the [O III] λλ4959, 5007 doublet. There is consensus in the literature that Eigenvector 1 is driven by Eddington ratio; our own analysis supports this. The studies cited above have noted that an Eddington ratio correction is required for single-epoch masses based on Hβ. We find, as did Marziani et al. (2019), that a similar correction is required for C IV-based masses as well.

One extreme of Eigenvector 1 is populated by sources with strong Fe II and very weak [O III]. The broad emission lines in the spectra of these objects also have relatively small line widths. By combining the R-L relation with eq. (2), the line-width dependence is seen to be

V ∝ L^(1/4) ṁ^(−1/2),

where ṁ ∝ L/M is the Eddington ratio (eq. 26); a short derivation follows below. Thus AGNs with the highest Eddington ratios have the smallest broad-line widths; many such sources are classified as "narrow-line Seyfert 1 (NLS1) galaxies" (Osterbrock & Pogge 1985). The Super-Eddington Accreting Massive Black Holes (SEAMBH) collaboration has focused on high-ṁ candidates in their reverberation-mapping program (Du et al. 2018; Du & Wang 2019). An important result from these studies, as we have noted earlier, is that the Hβ lags are smaller than predicted by the current state-of-the-art R-L relationship (Bentz et al. 2013). This implies that in these objects the ratio of hydrogen-ionizing photons to optical photons is lower than in the lower-ṁ sources; this is also consistent with the relative strength of low-ionization lines such as Fe II in SEAMBH sources, the weakness of high-ionization lines, such as [O III], and their soft X-ray spectra (Boller, Brandt, & Fink 1996). Du & Wang (2019) choose to make their correction to the BLR radius by adding a term that correlates with the deficiency of ionizing photons.
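For completeness, the scaling quoted above follows in three lines from the virial relation and an R-L slope of 1/2; in LaTeX form (our reconstruction of the intermediate steps):

\begin{align*}
  V \propto \left(\frac{M}{R}\right)^{1/2}, \qquad
  R \propto L^{1/2}, \qquad \dot{m} \propto \frac{L}{M} \\
  \Rightarrow\quad V \propto M^{1/2} L^{-1/4}
  \propto \left(\frac{L}{\dot{m}}\right)^{1/2} L^{-1/4}
  = L^{1/4}\,\dot{m}^{-1/2}.
\end{align*}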
In our approach, we absorb the correction directly into the virial product computation. As noted in §4.2, from a statistical point of view it would be preferable to replace the Eddington ratio with a parameter strongly correlated with it. The PCA studies referenced above find that the ratio of the equivalent widths (EW) or fluxes of Fe II to Hβ, R = EW(Fe II)/EW(Hβ), correlates well with Eddington ratio. In the UV, it is also found that the C IV blueshift correlates with Eddington ratio (Baskin & Laor 2005; Coatman et al. 2016; Sulentic et al. 2017). However, we find that the scatter in these relationships is so large that any gain in the accuracy of black hole mass estimates is offset by a large loss of precision. We therefore elect at this time to focus on the empirical formulae given in §5.

Future Improvements

While we believe our current single-epoch prescription for estimating quasar black hole masses is more accurate than previous prescriptions, we also recognize that there are additional improvements that can be made to improve both accuracy and precision, some of which we became aware of near the end of the current project. We intend to implement these in the future. Topics that we will investigate in the future include the following:

1. Replace those reverberation lag measurements made with the interpolated cross-correlation function (Gaskell & Peterson 1987; White & Peterson 1994; Peterson et al. 1998b, 2004) with lag measurements and uncertainties from JAVELIN (Zu, Kochanek, & Peterson 2011). Recent tests (Li et al. 2019; Yu et al. 2020) show that while the JAVELIN and interpolation cross-correlation lags are generally consistent, the uncertainties predicted by JAVELIN are more reliable.

2. Utilize the expanded SDSS-RM database, which now extends over six years, not only to make use of additional lag detections, but to capitalize on the gains in S/N that will increase the overall quality of the lag and line-width measurements and result in fewer rejections of poor data.

3. Expand the database in Table A1 with recent results and with other previous results that we excluded because they did not have starlight-corrected luminosities.

4. Update the VP06 database used to produce Table A5. There are now additional reverberation-mapped AGNs with archived HST UV spectra. Some of the poorer data in Table A5 can be replaced with higher-quality spectra.

5. Consider use of other line-width measures that may correlate well with σ_line but are less sensitive to blending in the wings. Mean absolute deviation is one such candidate.

6. Improve line-width measurements. There appear to be some systematic differences among the various data sets, probably due to different processes for measuring σ_M; for example, the bottom panels of Figure 10 show that the SE mass estimates for the VP06 sample are slightly higher than those from SDSS (compare also the last two columns in Table A5). Work on deblending algorithms would aid more precise measurement of σ_M in particular.

SUMMARY

The main results of this paper are:

1. We confirm that the luminosity of the broad component of the Hβ emission line, L(Hβ broad), is an excellent substitute for the AGN continuum luminosity L_AGN(5100 Å) for predicting the Hβ emission-line reverberation lag τ(Hβ). It has the advantage of being easier to isolate than L_AGN(5100 Å), which requires an accurate estimate of the host-galaxy starlight contribution to the observed luminosity.
2. We confirm that the line dispersion of the Hβ broad component, σ_M(Hβ), and the full width at half maximum of the Hβ broad component, FWHM_M(Hβ), in mean or single-epoch spectra are both reasonable proxies for the line dispersion of Hβ in the rms spectrum, σ_R(Hβ), for computing single-epoch virial products µ_SE(Hβ). We find that σ_M(Hβ) gives better results than FWHM_M(Hβ), but both are usable.

3. In the case of C IV, we find that the line dispersion of the C IV emission line σ_M(C IV) in the mean, or single-epoch, spectrum is a good proxy for the line dispersion in the rms spectrum σ_R(C IV) for estimating single-epoch virial products µ_SE(C IV). We find that FWHM_M(C IV), however, does not track σ_R(C IV) well enough to be used as a proxy.

4. Although the R-L relationship based on the continuum luminosity L(1350 Å) and the C IV emission-line reverberation lag τ(C IV) is not as well defined as that for Hβ, the relationship appears to have a similar slope and it appears to be suitable for estimating virial products µ_SE(C IV).

5. We confirm for both Hβ and C IV that combining the reverberation lag estimated from the luminosity with a suitable measurement of the emission-line width introduces a bias whereby the high masses are underestimated and the low masses are overestimated. We confirm that the parameter that accounts for the systematic difference between reverberation virial product measurements µ_RM and those estimated using only luminosity and line width is Eddington ratio. Increasing Eddington ratio causes the reverberation radius to shrink, suggesting a softening of the hydrogen-ionizing spectrum.

6. While the virial product estimate from combining luminosity and line width carries a systematic bias, the relationship between the reverberation virial product µ_RM and the single-epoch estimate µ_SE is still a power law, but with a slope somewhat less than unity (upper panels of Figure 4, left panel of Figure 9). We are therefore able to empirically correct this relationship to an unbiased estimator of µ_SE by fitting the residuals and essentially rotating the power-law distribution to have a slope of unity (lower panels of Figure 4, right panel of Figure 9). We present these empirical estimators for µ_SE(Hβ) and µ_SE(C IV) in §5.

DATABASE OF REVERBERATION-MAPPED AGNS

Reverberation-mapped AGNs provide the fundamental data that anchor the AGN mass scale. We selected all AGNs from the literature (as of 2019 August) with measurements of Hβ time lags and for which unsaturated host-galaxy images acquired with HST are available, since removal of the host-galaxy starlight contribution to the observed luminosity is critical to this calibration. It is worth noting, however, that since our analysis shows that the broad Hβ flux is a useful proxy for the 5100 Å continuum luminosity, this criterion is over-restrictive and we will avoid imposing it in future compilations. In many cases, there is more than one reverberation-mapping data set available in the literature. In a few cases, the more recent data were acquired to replace, say, a more poorly sampled data set or one for which the initial result was ambiguous for some reason. In other cases, there are multiple data sets of comparable quality for individual AGNs, and in these cases we include them all.
The particularly well-studied AGN NGC 5548 has been observed many times and in some sense has served as a "control" source that provides our best information about the repeatability of mass measurements, as the continuum and line widths show long-term (compared to reverberation time scales) variations. The final reverberation-mapped sample for Hβ is given in Table A1. Distances follow Bentz et al. (2013), for which the redshift-independent distances quoted in that paper are used. For two of these sources, NGC 4051 and NGC 4151, we use preliminary Cepheid-based distances (M.M. Fausnaugh, private communication), and for NGC 6814, we use the Cepheid-based distance from Bentz et al. (2019). Individual virial products for these sources are easily computed using the Hβ time lags (Column 6) and line dispersion measurements (Column 12) and the formula

µ = c τ σ_R² / G

(equation A1). Further conversion to mass requires multiplication by the virial factor f, i.e., log M = log f + log µ, a dimensionless factor that depends on the inclination, structure, and kinematics of the broad-Hβ-emitting region - indeed, detailed modeling of 9 of these objects (Pancoast et al. 2014; Grier et al. 2017a) shows that f depends most clearly on inclination (Grier et al. 2017a). Since such models are available for only a very limited number of AGNs, it is more common to use a statistical estimate of a mean value of f based on a secondary mass indicator, specifically the well-known M_BH-σ* relationship (Ferrarese & Merritt 2000; Gebhardt et al. 2000; Gültekin et al. 2009), where σ* is the host-galaxy stellar bulge velocity dispersion. The required assumption is that the AGN M_BH-σ* relationship is identical to that of quiescent galaxies (Woo et al. 2013). In fact, it is found that the µ-σ* relationship has a slope consistent with the M_BH-σ* slope for quiescent galaxies, and the zero points disagree by only a multiplicative factor, which is taken to be f. Here we take log f = 0.683 ± 0.150 (Batiste et al. 2017), where the error on the mean is ∆log f = 0.030; this error must be propagated into the mass measurement error when comparing AGN reverberation-based masses to those based on other methods.

NOTE (Table A2) - Columns are 1: AGN name; 2: literature reference for data; 3: Julian Dates of observations; 4: redshift; 5: luminosity distance; 6: C IV time lag τ(C IV); 7: log continuum luminosity at 1350 Å; 8: FWHM of C IV in the mean spectrum; 9: line dispersion of C IV in the mean spectrum; 10: line dispersion of C IV in the rms spectrum.
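In code, equation (A1) and the conversion to mass are direct; the physical constants are standard values, and the example lag and width are arbitrary:

import numpy as np

C_CM_S, G_CGS = 2.998e10, 6.674e-8  # speed of light, G (cgs)
MSUN_G, DAY_S = 1.989e33, 86400.0

def virial_product_msun(tau_days, sigma_R_kms):
    # mu = c * tau * sigma_R^2 / G, returned in solar masses
    return C_CM_S * (tau_days * DAY_S) * (sigma_R_kms * 1.0e5) ** 2 / G_CGS / MSUN_G

mu = virial_product_msun(10.0, 3000.0)  # tau = 10 days, sigma_R = 3000 km/s
log_M = 0.683 + np.log10(mu)            # log f = 0.683 (Batiste et al. 2017)
print(np.log10(mu), log_M)              # ~7.2 and ~7.9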
Fas-associated protein with death domain (FADD) regulates autophagy through promoting the expression of Ras homolog enriched in brain (Rheb) in human breast adenocarcinoma cells

FADD (Fas-associated protein with death domain) is a classical adaptor protein in apoptosis. Increasing evidence has shown that FADD is also implicated in cell cycle progression, proliferation, and tumorigenesis. The role of FADD in cancer remains largely unexplored. In this study, in silico analysis using Oncomine and the Kaplan Meier plotter revealed that FADD is significantly up-regulated in breast cancer tissues and closely associated with a poor prognosis in patients with breast cancer. To better understand the functions of FADD in breast cancer, we performed proteomics analysis by LC-MS/MS detection and found that the Rheb-mTORC1 pathway was dysregulated in MCF-7 cells upon FADD knockdown. The mTORC1 pathway is a key regulator of many processes, including cell growth, metabolism, and autophagy. Here, FADD interference down-regulated Rheb expression and repressed mTORC1 activity in breast cancer cell lines. Autophagy was induced by FADD deficiency in MCF-7 or MDA-MB-231 cells but rescued by recovering Rheb expression. Similarly, the growth defect in FADD-knockdown cells was also restored by Rheb overexpression. These findings imply a novel role of FADD in tumor progression via the Rheb-mTORC1 pathway in breast cancer.

INTRODUCTION

Fas-associated protein with death domain (FADD) is the key adaptor protein transmitting apoptotic signals mediated by death receptors (DRs). It was originally identified in FAS-induced apoptosis [1][2][3]. Following the death domain (DD) interaction between FADD and FAS, cytoplasmic procaspase-8 binds to FADD through DED-DED interactions and forms the death-inducing signaling complex (DISC). Besides being a main death adaptor molecule, FADD is also required for T cell proliferation. Several groups have demonstrated that FADD deficiency in peripheral T lymphocytes results in an inhibition of mitogen-induced T cell proliferation [4,5]. FADD deficiency also leads to a dysregulation of the cell cycle machinery. Recently, emerging evidence has shown that FADD expression is associated with tumor development [6]. Amplification of 11q13, a chromosomal region containing the gene encoding FADD, is frequently observed in many cancer cells. Overexpression of FADD might serve as a biomarker in head and neck squamous cell carcinoma [7,8]. FADD protein expression could contribute to disease progression in several malignancies, so the mechanism of FADD in tumorigenesis needs to be further investigated. At present, in silico analysis using the Oncomine database is a useful way to obtain a disease summary for FADD, and proteomics coupled with bioinformatics analysis provides a powerful tool to find the potential targets of FADD and its signaling pathway networks. In this study, we first report that FADD expression is remarkably higher in breast cancer, and we applied LC-MS/MS detection plus bioinformatics analysis to reveal that the Rheb-mTORC1 pathway is dysregulated in breast cancer cells upon FADD knockdown. mTOR is a serine/threonine kinase and functions as a key modulator of cell proliferation, protein synthesis, aging, and autophagy [9,10]. The best-described target of mTORC1 is its downstream marker ribosomal S6 protein kinase 1 (p70s6k). p70s6k activation requires mTORC1-mediated phosphorylation [11].
The mTORC1 activity is tightly regulated by a wide range of environmental signals. One key upstream activator of mTORC1 is the small GTP-binding protein Rheb (Ras homolog enriched in brain), which is the most well-known regulator of mTORC1 to date. Rheb promotes mTORC1 activity and enhances p70s6k phosphorylation in a rapamycin-dependent manner [12][13][14][15]. Recent studies show that the Rheb-mTORC1 signaling axis is hyper-activated in a variety of human cancers and closely related to tumorigenesis [16,17]. Therefore, we performed further cell biological examinations on FADD knockdown to address the Rheb-mTORC1 pathway. Our data showed that FADD interference decreased Rheb expression at the transcriptional level. To explore the effect of FADD on the Rheb-mTORC1 signaling axis, we assessed p70s6k phosphorylation as a readout of mTOR activity. A decrease of p70s6k phosphorylation was observed in FADD-knockdown cells, which was rescued by recovering Rheb expression. Inhibition of autophagy is one important function of mTORC1. Similarly, the induction of autophagy by FADD deficiency was also rescued by recovering Rheb expression. Moreover, Rheb overexpression could improve cell growth that was retarded by FADD knockdown. Collectively, these data suggest a novel role of FADD in breast tumorigenesis through promoting Rheb expression.

High expression of FADD in human breast cancer correlated with poor prognosis

The Oncomine platform (http://www.oncomine.org) is a free online bioinformatic resource of cancer transcriptome data. To gain an overview of FADD expression in human cancers, we performed an analysis of published patient data using Oncomine and found that the FADD mRNA level is significantly up-regulated in human breast cancer (Figure 1A). In the Curtis breast dataset with 2136 samples [18], FADD expression levels were up-regulated in most breast cancer tissues (n>1556, p=3.09E-13) compared with normal tissues (n=144) (Figure 1B). To confirm the Oncomine data, we analyzed FADD expression in a breast tissue microarray (TMA) containing 30 cases of breast specimens by immunohistochemical (IHC) staining (Figure 1C). High FADD expression was observed in 21 of 30 (70%) tumor tissues compared with adjacent histologically normal tissues, suggesting that elevated FADD expression might contribute to tumor development. Using the Kaplan Meier plotter, another free online tool for meta-analysis based biomarker assessment [19], we found that high FADD expression in patients was correlated with worse survival compared with FADD-low counterparts (HR=1.6, logrank P=1e-15) (Figure 1D).

Figure 1D. Meta-analysis based biomarker assessment shows that high FADD expression versus low expression predicts poorer survival in human breast cancer; the P-value is calculated using the log-rank test.

Collectively, these findings indicate that up-regulated FADD predicts a poor prognosis in breast cancer patients and is closely correlated with tumor progression in breast cancer.
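A survival comparison of the kind shown in Figure 1D can be reproduced with the lifelines package; the sketch below uses simulated follow-up times in place of the actual patient cohort, so the survival times, censoring fractions, and group sizes are all invented:

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_high = rng.exponential(60.0, 200)  # months, FADD-high (simulated)
t_low = rng.exponential(90.0, 200)   # months, FADD-low (simulated)
e_high = rng.random(200) < 0.7       # event observed (True) vs censored
e_low = rng.random(200) < 0.7

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="FADD-high")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="FADD-low")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(res.p_value)  # log-rank P, analogous to the value quoted for Figure 1D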
LC-MS/MS based proteomics analysis in breast cancer cells

To find the molecular pathways directly or indirectly controlled by FADD in the tumorigenesis of breast cancer, a high-throughput proteomic approach was applied to the human breast cancer cell line MCF-7 with FADD knockdown. The knockdown of FADD was confirmed by western blotting, shown in Supplementary Figure S1A. About 500 differentially expressed proteins were identified. We used the GeneGO/MetaCore software to analyze the biological networks related to these proteins. GeneGo Map Folder analysis was applied to identify the ten pathways of highest significance (Figure 2A). A pathway in apoptosis and survival ranked first, which is consistent with the main function of FADD as an apoptotic protein. Among them, three pathways were linked with the Rheb-mTORC1 signaling axis (Figures 2B-2D). Further analysis of GeneGo processes also showed that three of the top ten processes were linked to the Rheb-mTORC1 signaling axis (Supplementary Figure S1B). Since the mTOR pathway is a key regulator of cell growth and proliferation, its deregulation might be an important clue to FADD function in breast cancer.

Rheb expression decreased by FADD knockdown

Proteomics analysis showed that the expression of Rheb and mTOR was down-regulated in FADD-knockdown MCF-7 cells compared with control cells (Supplementary Table S1), which was confirmed by western blotting analysis (Figure 3A). There was about a 60% reduction of Rheb protein in MCF-7 cells treated with FADD siRNAs. No significant difference in mTOR expression was observed (Supplementary Figure S2). A similar result was confirmed in another breast cancer cell line, MDA-MB-231 (Figure 3B). With increasing amounts of transfected FADD siRNAs, the protein level of Rheb decreased in a dose-dependent manner in both MCF-7 and MDA-MB-231 cells (Figure 3C and 3D). Notably, Rheb expression was also elevated in the breast TMA, as was FADD (Supplementary Figure S3), and has been reported to correlate with poor prognosis in patients with breast cancer [20]. The protein level of Rheb was thus well consistent with FADD expression in breast cancers.

The effect of FADD on Rheb transcription

We next tested the effect of FADD on Rheb gene expression at the transcriptional level. After RNAi for FADD, Rheb mRNA was examined by qPCR assays in MCF-7 cells (Figure 4A) and MDA-MB-231 cells (Figure 4B). Consistent with the above data, Rheb mRNA also decreased in a dose-dependent manner as FADD was gradually reduced. A decrease in mRNA level is generally attributed to one of two factors, mRNA stability or transcriptional activity. To examine the effect of FADD on the stability of Rheb mRNA, MCF-7 cells were transfected with FADD siRNA or control siRNA for 48 h, treated with actinomycin D (ActD) for the indicated times, and then harvested to quantify Rheb mRNA by qPCR. As shown in Figure 4C, the degradation rate of Rheb mRNA showed no obvious difference between the two groups. We then constructed a promoter-luciferase reporter vector to analyze the transcriptional activity of Rheb. FADD interference inhibited the luciferase activity of the Rheb promoter (Figure 4D), indicating that FADD affects Rheb expression at the transcriptional level.
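Relative mRNA levels from such qPCR assays are conventionally computed with the 2^-ddCt method; the text does not state the quantification scheme, so the sketch below, including the reference gene and the Ct values, is an assumption for illustration:

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ddCt: target normalized to a reference gene, relative to control
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical Ct values: Rheb vs a housekeeping gene in FADD-siRNA vs NC cells
print(relative_expression(26.5, 18.0, 25.0, 18.1))  # <1 indicates down-regulation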
mTORC1 activity regulated by FADD through Rheb

Based on the GeneGo Map analysis, the Rheb-mTORC1 signaling axis was perturbed by FADD knockdown. p70s6k is a well-defined downstream target of mTORC1, and its phosphorylation is a reliable measure of mTORC1 activity [11,15,21]. Compared with transfection with NC siRNA, the level of p70s6k phosphorylation decreased to 60% in MCF-7 cells transfected with FADD siRNA (Figure 5A) and to 40% in MDA-MB-231 cells (Figure 5B), respectively. The mTORC1 activity is tightly regulated by a wide range of environmental signals, including serum. MCF-7 cells were transfected with FADD siRNA for 24 h, starved in DMEM without serum for 24 h, and then stimulated with 20% serum for 15 min [22]. The stimulation with serum effectively enhanced p70s6k phosphorylation in MCF-7 cells with control siRNA, whereas little phosphorylation was seen in cells with FADD siRNA (Figure 5C), indicating an impairment of mTORC1 activity under FADD deficiency. To examine whether the influence of FADD on mTORC1 activity is mediated by Rheb, we recovered Rheb expression in FADD-knockdown MCF-7 cells by transfecting a Rheb expression vector. p70s6k phosphorylation was rescued from 68% to 80% by the supplement of Rheb expression (Figure 5D), which was also observed in MDA-MB-231 cells (Figure 5E). These findings suggest that Rheb is necessary for FADD modulation of mTORC1 activity.

Autophagy induced by FADD silencing in human breast cancer cells

One important function of mTORC1 is the inhibition of autophagy [23][24][25]. Considering that the impairment of mTORC1 activity in FADD-deficient cells might initiate autophagy, we next detected autophagy using LC3B as a marker. During autophagy, LC3B I is modified with phosphatidylethanolamine (PE) and converted to LC3B II, and the ratio of LC3B II to LC3B I is widely used to measure cellular autophagic activity [26,27]. LC3B I to LC3B II conversion was markedly increased in MCF-7 or MDA-MB-231 cells when FADD was knocked down (Figure 6A and 6B). Meanwhile, a 40% to 60% reduction in the expression of p62, another autophagosomal marker that is degraded during autophagy, accompanied the FADD reduction. When autophagy was induced by starvation, stronger autophagic activity was also observed in FADD-knockdown cells (Supplementary Figure S4). To exclude artifacts of the RNAi approach, we used another interference technique, CRISPR/Cas9, to effectively down-regulate FADD expression and obtained similar results (Supplementary Figure S5). Furthermore, GFP-LC3 was used to visualize autophagy. The formation of GFP-LC3-labeled vacuoles increased significantly in MCF-7 cells with deficient FADD; the quantitation of GFP-LC3-punctate cells is shown in Figure 6C. The induction of autophagy by FADD silencing was further confirmed by the morphological changes seen in transmission electron microscopy (TEM) analysis (Figure 6D). There were more double-membrane cytoplasmic vacuoles (arrowheads) in cells transfected with FADD siRNA than with control siRNA, indicating stronger autophagic activity. Chloroquine (CQ) inhibits autophagy by blocking both the fusion of autophagosomes with lysosomes and lysosomal protein degradation. CQ treatment resulted in the accumulation of LC3B and p62, but did not abolish the greater conversion of LC3B I to LC3B II and the lower expression of p62 in FADD-knockdown cells. These data demonstrate that FADD interference promoted the occurrence of autophagy at an early stage.

Autophagy mediated by FADD via the Rheb-mTOR pathway

To examine whether Rheb is necessary for FADD-mediated autophagy, a Rheb expression vector was cotransfected with FADD siRNA or control siRNA, respectively. The ratio of LC3B II/LC3B I declined and p62 expression increased upon Rheb overexpression in both MCF-7 cells (Figure 7A) and MDA-MB-231 cells (Figure 7B). By fluorescence imaging, we observed that both the number of dots inside cells and the percentage of cells with GFP-LC3 puncta formation decreased upon Rheb overexpression, especially in FADD-knockdown cells (Figure 7C). Transmission electron microscopy analysis further revealed that the autophagosome formation mediated by FADD interference was inhibited by Rheb overexpression (Figure 7D). These data provide evidence for the role of FADD in autophagy via the Rheb-mTOR pathway.
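Densitometric quantification of these markers reduces to simple ratios; the band intensities below are invented, and the loading control (e.g., beta-actin) is our assumption, as the text does not name it:

def autophagy_markers(lc3b_i, lc3b_ii, p62, loading):
    # returns the LC3B II/I conversion ratio and loading-normalized p62
    return lc3b_ii / lc3b_i, p62 / loading

ratio_nc, p62_nc = autophagy_markers(1.00, 0.40, 1.00, 1.00)  # NC siRNA
ratio_si, p62_si = autophagy_markers(0.80, 0.95, 0.55, 1.00)  # FADD siRNA
print(ratio_si / ratio_nc, p62_si / p62_nc)  # higher conversion, lower p62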
These data provide evidence for a role of FADD in autophagy via the Rheb-mTOR pathway.

The crosstalk of cell proliferation and autophagy linked by FADD

The mTOR pathway integrates signals from nutrients and growth factors to regulate many processes, including autophagy and cell proliferation. We were therefore interested in the influence of FADD on these two processes. Two ATG5 siRNAs were designed to block autophagy at an early stage. #2 ATG5 siRNA was the more effective candidate and was thus used in later experiments (Figure 8A). As expected, ATG5 interference reduced autophagic activity (Figure 8B). We then monitored cell growth using Real-Time Cell Analysis (RTCA), a novel approach for assessing cellular proliferation. The slope computed by the instrument software represents the growth rate shown in Figure 8C. At 48 h after transfection, there was no significant difference in cell proliferation among the four groups. By 96 h post transfection, the growth of cells treated with FADD siRNA was clearly restrained, whereas cells treated with both FADD and ATG5 siRNAs showed partly restored proliferation (Figure 8C). After continuous culture for 96 h without replenishment of fresh culture medium, nutrient deficiency would be expected to induce autophagy. At this time point, the proliferation defect caused by FADD siRNA was partly rescued by the addition of ATG5 siRNA, suggesting that autophagy induced by FADD deficiency might be one important reason for the growth defect. We further examined whether Rheb overexpression could recover the impairment of cell proliferation mediated by FADD deficiency. Consistent with the previous results, Rheb also improved the proliferation of FADD-deficient cells (Figure 8D).

DISCUSSION

Recently, amplification of FADD has been observed in many different types of cancer and is linked to cancer progression [28-31]. Here, we provide evidence for the first time that FADD overexpression correlates with poor outcome in human breast cancer. With the help of high-throughput proteomics and bioinformatics analysis, the Rheb-mTORC1 pathway was predicted to be dysregulated in the human breast adenocarcinoma cell line MCF-7 when FADD was knocked down (Figure 2). Rheb has been regarded as a novel prognostic factor in human cancer because it activates the key metabolic regulator mTORC1. Elevated Rheb expression, coupled with mTORC1 hyperactivation, has been reported in a wide variety of tumors, including human breast cancers [34, 57, 58]. In our study, Rheb downregulation upon FADD deficiency was validated in the human breast cancer cell lines MCF-7 and MDA-MB-231 (Figure 3), as was the impairment of mTORC1 activity (Figure 5). Like FADD, high Rheb expression also correlates with poor prognosis in human breast cancer [20]. FADD is much more than an instrument of death; it is implicated in embryonic development, cell proliferation, tumor progression, inflammation, necrosis, and autophagy. Nevertheless, its best-characterized function is as a pro-apoptotic adaptor. We previously reported that the FADD protein has the potential to oligomerize extensively: FADD self-aggregated in vitro, and FADD transfected into mammalian cells effectively induced apoptosis by forming death effector filaments independent of receptor cross-linking at the plasma membrane [40].
Apoptosis induced by FADD overexpression also occurred in breast cancer cells (Supplementary Figure S6), so RNA interference of FADD was widely used in our studies as a reasonable and practical approach for studying the effect of FADD protein expression levels. To consolidate the conclusion that Rheb expression is regulated by FADD, the results were fully verified in FADD-knockout MEF cells (Supplementary Figure S7). FADD could thus upregulate Rheb expression and activate mTORC1. mTORC1 activity is tightly linked to cellular processes such as autophagy and cell proliferation [32]. FADD interference induced autophagy by downregulating Rheb-mTORC1 activity, and this induction was reversed by restoring Rheb expression. Similarly, the proliferative deficiency caused by FADD silencing was also rescued by Rheb overexpression. Our findings indicate that Rheb may play an important role in the function of FADD in tumorigenesis. Growing evidence has shed light on the role of autophagy in proliferation and tumorigenesis. ATG5−/− CD4+ and CD8+ T cells failed to undergo efficient proliferation after TCR stimulation [33]. However, unrestricted autophagy impairs cell proliferation [34, 35]. Mice with systemic deletion of ATG5 and liver-specific ATG7−/− mice were reported to develop benign liver adenomas, together with elevated cell proliferation [36]. Similarly, in our hands, ATG5 interference inhibited autophagy and partly rescued the proliferative deficiency of FADD-knockdown cells. Regulation of autophagy may therefore be one route by which FADD exerts its proliferative role. In conclusion, our study strengthens the role of FADD in human breast tumorigenesis. FADD upregulates Rheb expression and promotes mTORC1 activity, and activated mTORC1 augments cell proliferation via autophagy inhibition. This finding enriches the known functions of FADD and, more importantly, identifies a promising target for breast cancer therapy.

siRNAs and transfection

All synthetic siRNAs and the negative control (NC) were purchased from Shanghai GenePharma Co. Ltd.

Luciferase reporter assay

MCF-7 cells were first transfected with siRNA/NC and then cotransfected with a Rheb-promoter luciferase reporter and a control pRL reporter for 24 h. Luciferase activities were measured sequentially using the Dual-Luciferase assay (Promega, USA). All measurements were normalized to Renilla luciferase activity to correct for variations in transfection efficiency.

LC-MS/MS analysis and bioinformatics analysis

The LC-MS/MS analysis was performed as previously described [37]. Protein concentrations were determined using the bicinchoninic acid (BCA) method. Equal amounts of protein (200 μg) were used for iTRAQ labeling according to the manufacturer's instructions. Raw MS/MS data were analyzed with the Agilent G2721AA Spectrum Mill MS Proteomics Workbench (Rev A.03.03.078) against the UniProtKB/Swiss-Prot database for protein identification. The network-building tool MetaCore™ version 5.4 (GeneGo) was used to establish potential signaling networks.

Tissue microarray analysis

A tissue microarray (TMA) of breast cancer was purchased from Shanghai Outdo Biotech Co. Ltd. Specimens included stage II or III invasive ductal cancer (n = 30) and adjacent normal tissue (n = 30). TMA immunohistochemical analysis was performed as previously described [38]. Quantitative analysis of FADD and Rheb staining was performed with Image-Pro Plus software.
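For illustration, the LC3B-II/LC3B-I conversion ratio and normalized p62 level reported in the Results reduce to simple arithmetic on densitometry readings. The following minimal Python sketch is not part of the paper's methods; the `autophagy_indices` helper and all band intensities are hypothetical placeholders:

```python
# Minimal sketch (not from the paper's methods): quantifying the
# LC3B-II/LC3B-I conversion ratio and normalized p62 level from
# densitometry readings, e.g. as exported from Image-Pro Plus or ImageJ.

def autophagy_indices(lc3b_i, lc3b_ii, p62, loading_control):
    """Return the LC3B-II/LC3B-I ratio and the p62 level normalized
    to a loading-control band intensity (e.g. beta-actin)."""
    lc3b_ratio = lc3b_ii / lc3b_i
    p62_norm = p62 / loading_control
    return lc3b_ratio, p62_norm

# Hypothetical band intensities for control vs. FADD-knockdown lysates
ctrl = autophagy_indices(lc3b_i=1200, lc3b_ii=600, p62=900, loading_control=1000)
fadd_kd = autophagy_indices(lc3b_i=800, lc3b_ii=1500, p62=450, loading_control=1000)

print(f"control:  LC3B-II/I = {ctrl[0]:.2f}, p62 = {ctrl[1]:.2f}")
print(f"FADD-KD:  LC3B-II/I = {fadd_kd[0]:.2f}, p62 = {fadd_kd[1]:.2f}")
# A higher LC3B-II/I ratio together with reduced p62 is read as
# increased autophagic activity, as in Figures 6A and 6B.
```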
Transmission electron microscopy assay

MCF-7 cells were cultured in 100 mm dishes and co-transfected with FADD siRNA or NC siRNA and the Rheb expression vector. After 48 h, cells were harvested and washed once with cold PBS in a 1.5 ml microcentrifuge tube. Cells were fixed with 0.25% glutaraldehyde at 4°C overnight. Samples were then observed under a transmission electron microscope (Hitachi, Japan).

Real-time cell analysis (RTCA) of cell proliferation

The procedure was described previously [39]. Briefly, cells were detached and counted with an Automated Cell Counter (Invitrogen, USA). 5,000 cells per group were seeded into modified 16-well plates (E-plate, Roche, Germany) and monitored using the xCELLigence RTCA DP instrument (Roche, Germany). Data collection and analysis were performed in accordance with the manufacturer's guidelines.
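As noted above, the RTCA software reports a slope that represents the growth rate. A minimal sketch of that computation on hypothetical cell-index readings (an illustration, not the xCELLigence vendor code):

```python
import numpy as np

# Hypothetical RTCA cell-index readings sampled hourly over a window.
# The growth rate is estimated as the slope of a least-squares line
# fitted to cell index vs. time, mirroring what the RTCA software reports.
time_h = np.arange(0, 12)   # hours within the analysis window
cell_index = np.array([1.00, 1.10, 1.25, 1.33, 1.50, 1.58,
                       1.70, 1.82, 1.95, 2.05, 2.20, 2.31])

slope, intercept = np.polyfit(time_h, cell_index, deg=1)
print(f"growth rate (slope) = {slope:.3f} cell-index units per hour")
```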
4,436.6
2016-03-22T00:00:00.000
[ "Biology", "Medicine" ]
Ruthenium Organometallic-bromophenol Blue Pairs in Hydrogel Sensing Matrix for Dissolved Ammonia Förster Resonance Energy Transfer Sensor

Monitoring dissolved ammonia (DA) in tilapia fish farming ponds is important because a DA concentration as low as 100 parts per billion (ppb) depresses tilapia food intake and growth. Hence, a DA sensor capable of sub-ppm-level detection, based on the Förster resonance energy transfer (FRET) of a single donor-acceptor pair, has been developed to determine DA in tilapia breeding ponds. Ruthenium organometallic complex (donor) and bromophenol blue (acceptor) at certain mole ratios were immobilized in hydrogel, polyvinyl chloride, and polysiloxane membrane matrices. The hydrogel membrane matrix showed the best performance. ppb-level DA sensitivity was achieved with a membrane 2-3 μm thick. The FRET sensing membrane performed reversibly, with a response time of 10 min at 100 ppb DA in phosphate buffer (pH 11) with 100% (8 ppm) dissolved oxygen in the solution. Neither pH nor other amines interfered with the selectivity toward DA. The membrane was then applied to determine the DA concentration in tilapia fish ponds with 80% accuracy relative to the Nessler method.

Introduction

Dissolved ammonia (DA) occurs naturally and is an important species in landed fish and marine aquaculture ponds. High concentrations of DA in fish farming ponds originate from environmental pollution due to agricultural and industrial wastes and solid-waste leachates, resulting in unhealthy fish and marine aquatic species and affecting the quality and production of marine products. (1) Continuous monitoring of DA, pH, dissolved oxygen (DO), temperature, and salinity is important because of their possible effects on organisms' health, feed utilization, growth rates, and stocking densities. For example, an increase in ammonia level results in potentially fatal pathophysiological damage to the gills and kidneys, and neurotoxicity, hyperventilation, and convulsions have been observed. (2) The indophenol blue or Berthelot reaction is commonly used for water analysis. To date, it has been developed into an integrated microfluidic system, but it consumes significant amounts of reagents and is slow. (3) Electrochemistry is a popular analytical approach but suffers interference from the salinity of water samples and from marine corrosion. Many methods based on optical intensity (fluorescence and absorption) have been developed for ammonia detection, (4)-(8) but they may suffer from optical-path displacement and photobleaching of the optical probe. A phase-based method has been introduced to overcome these drawbacks, in which the optimized light intensity reduces sensor photobleaching. Moreover, the Förster resonance energy transfer (FRET) method is more robust, giving more stable measurement readings than the intensity-based method, as it is less sensitive to the optical alignment of the light reflected from the optical probe onto the spectrophotometer. The principle behind the optical chemical ammonia sensor described here is based on the Förster distance between the luminophore or donor (ruthenium organometallic complex) and the chromophore or acceptor (e.g., a pH-sensitive indicator). An optimum Förster distance results in efficient overlap of the donor emission and acceptor absorption spectra; because the luminophore possesses a long decay time, nonradiative energy is transferred to the acceptor after the acceptor reacts with DA.
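The distance dependence sketched above follows the standard Förster relation E = 1/(1 + (r/R0)^6). A short illustrative computation in Python; the Förster radius used here is an arbitrary assumption for illustration, not a measured property of the Ru(dpp)3-BPB pair:

```python
def fret_efficiency(r_nm, r0_nm):
    """Standard Foerster relation: transfer efficiency for a single
    donor-acceptor pair at separation r, with Foerster radius R0."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 5.0  # nm, assumed Foerster radius for illustration only
for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r, R0):.3f}")
# At r = R0 the efficiency is 50%; it falls off steeply (~r^-6) beyond that,
# which is why donor-acceptor spacing in the membrane matrix matters.
```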
The frequency-domain method of measuring phase and modulation is advantageous because intensity-based measurements are prone to luminophore photobleaching. The concept of dynamic quenching allows the DA concentration to be determined from the observed phase shift of the decay time of the total emission signal. This mechanism is termed dynamic or collisional quenching, in which the quenching agent (Q) (acceptor) accepts nonradiative energy transfer from the donor. (9) In this study, an optochemical sensing probe was mounted on a bundle of fiber-optic cables connected to a time/frequency detector, which minimizes displacement of the optical path length. The advantages of a FRET-based optical chemical ammonia sensor are that it is relatively cheap to implement, enables rapid detection, and is robust in marine environments compared with an optical intensity-based setup.

Optical membrane preparation

0.4 mg of BPB, 0.3 mg of the ruthenium complex, and 0.4 mg of hydrogel D7 were weighed into a vial. Then, 4 mL of DMF was pipetted into the vial. The vial containing the cocktail was sealed with parafilm and aluminium foil, and the cocktail was sonicated for 2 h. An O-ring of 0.6 cm diameter was placed on biaxially oriented polyethylene terephthalate (BoPET) as shown in Fig. 1(c). 10 μL of the cocktail solution was pipetted with a micropipette and drop-cast onto the BoPET so that the cocktail sat at the center of the O-ring. A transparent membrane 2 to 4 µm thick with an internal diameter of 6 mm was produced, as shown in Fig. 1(a). The sensing membrane was mounted, tightly sealed with polytetrafluoroethylene (PTFE), and positioned at the end of an optical fiber tip as shown in Fig. 1(b).

Instrumentation

The fluorescence emissions were measured using a high-resolution spectrometer (HR4000 UV-VIS, Ocean Optics, USA). The excitation light was emitted by a blue pig-tailed LED (LE-3B, WT&T, South Korea) at 455 nm, triggered from a custom controller board. The controller allowed the blue LED to be triggered externally by a function generator (AFG3102, Tektronix, USA) at 45 kHz. This initial modulation phase reference signal was measured using a lock-in amplifier (7265, Signal Recovery, Ametek Scientific Instruments, USA). The sensor membrane was attached at the end of a bifurcated fiber probe (BIFBORO-1000-2, Ocean Optics, USA). The blue light propagated through this bifurcated fiber and excited the sensor membrane. The fluorescence emission was measured concurrently by a photodetector and converted to an electrical signal, and the returning signal was detected by the lock-in amplifier. A set of optical bandpass filters (FD1B, FD1R; Newport, USA) was placed before the photodetector to ensure that only red fluorescence emission coming from the sensor membrane was measured and to increase the signal-to-noise ratio (SNR). The experimental setup is depicted in Fig. 2. The spectra were recorded using a UV spectrophotometer (Lambda 35, Perkin Elmer, USA) and a spectrofluorometer (LS 55, Perkin Elmer, USA). GraphPad Prism version 6 was used to manage the data.

Donor emission and acceptor absorption spectra

DA diffused into the sensing membrane, reacted with the protonated pH indicator (IndH), and deprotonated the indicator (Ind−), forming the ammonium ion (NH4+) as a counter ion, as in Eq. (1). The unprotonated indicator has an absorbance maximum at 609 nm [Fig. 3(a)], while the protonated form absorbs at 440 nm, in agreement with Meier et al.
(9) The effect of DA concentration on the UV absorbance of both the protonated and unprotonated indicator is shown in Fig. 3(b). As the DA concentration increases, the absorbance of the deprotonated indicator increases and that of the protonated indicator decreases. A DA absorbance calibration curve showed a linear relationship, in agreement with Beer's law. Interestingly, excitation of Ru(dpp)3 in the range from 450 to 470 nm was observed by UV-vis. This excitation wavelength range was then chosen as the excitation wavelength in the spectrofluorometer to obtain approximately the maximum Ru(dpp)3 emission intensity at 609 nm. The efficient overlap of the Ru(dpp)3 emission and the unprotonated indicator absorption enables efficient energy transfer in the FRET system. The overlap of the emission and absorption spectra increased as the DA concentration increased. Spectral overlap was verified with spectrophotometers before using the FRET system, because a FRET failure alone would not allow incorrect donor-acceptor orientation or donor-acceptor distance to be diagnosed.

Optimization of the DA FRET optical chemical sensor

Several properties of the transmissive optical membrane were examined to obtain good FRET, beginning with the polymer chosen as the donor-acceptor immobilization medium. As tabulated in Table 1, the polysiloxane sol-gel matrix exhibited the lowest quenching value. The PVC polymer showed a moderate value, and the polyether polyurethane (hydrogel) had the highest Stern-Volmer dynamic quenching constant, which makes it advantageous as a membrane matrix for luminescent probes with longer decay times and results in high sensitivity toward DA. The high water content of hydrogels renders them biocompatible and allows DA diffusion through the polymer network. (11) The hydrogels were dissolved in tetrahydrofuran, in which the Ru(dpp)3 emission peak remained at λmax = 609 nm. Other solvents, namely EtOH, MeOH, and DMF, shifted the λmax of the Ru(dpp)3 emission peak to 615, 618, and 627 nm, respectively, which resulted in insufficient overlap of the donor emission and acceptor absorption spectra, thereby decreasing the FRET efficiency as shown in Fig. 4. A PVC membrane plasticized with DOS performed better than one plasticized with NPOE, as NPOE contains nitro groups that quench Ru(dpp)3. However, the inhomogeneity of water-saturated plasticized PVC not only perturbed the propagation of the red emission light into the fiber-optic core (FOC) but also decreased the FRET efficiency between donor and acceptor. Sol-gels have become increasingly promising membrane matrices for optical sensors. Therefore, we synthesized a sol-gel-based membrane matrix as described elsewhere, with minor modifications. (12) The FRET signal was not continuous and displayed a poor signal-to-noise ratio owing to the rigidity of the sol-gel, which resulted in poor transmission. The sol-gel rigidity can be addressed by manipulating the hydrolysis and condensation of tetraethoxysilane in the sol-gel network. (13)

Analytical performance of the DA sensor

The phase shift and modulation of the emission depend on the relative values of the lifetime and the light-source modulation frequency. A modulation frequency of 45 kHz was applied, and a Ru(dpp)3 luminescent decay time of 3 to 4 µs in the absence of DA was obtained. A series of DA concentrations was spiked into test solutions to establish a linear relationship (y = 1.5144x − 1.4691, R² = 0.9973) between the shifted phase angle (Δ°) and the DA concentration over the range 0-100 ppb.
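In the frequency-domain scheme described here, the apparent lifetime and phase are linked by the standard relation tan(φ) = 2πfτ, and the reported fit maps signal to concentration. A small sketch under those relations; note that the interpretation of y and x in the calibration line (and their units) is our assumption about how the reported fit is used:

```python
import math

MOD_FREQ_HZ = 45_000                  # modulation frequency used in the paper
SLOPE, INTERCEPT = 1.5144, -1.4691    # reported calibration fit y = 1.5144x - 1.4691

def phase_from_lifetime(tau_s):
    """Frequency-domain relation tan(phi) = 2*pi*f*tau."""
    return math.degrees(math.atan(2 * math.pi * MOD_FREQ_HZ * tau_s))

def concentration_from_signal(y):
    """Invert the reported linear calibration y = SLOPE*x + INTERCEPT,
    assuming y is the calibrated sensor response and x the DA level in
    the units used for the fit."""
    return (y - INTERCEPT) / SLOPE

# A 3.5 us Ru(dpp)3 lifetime sits near the optimum for 45 kHz modulation:
print(f"phase at tau = 3.5 us: {phase_from_lifetime(3.5e-6):.1f} deg")  # ~44.7 deg
print(f"signal y = 100 -> x = {concentration_from_signal(100):.1f}")
```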
Increasing the DA concentration increased the shifted phase angle because the Ru(dpp)3 donor lifetime was quenched by the BPB acceptor. An overall phase-shift change of 3.7 ± 1.0° was observed over the DA concentration range of 0 to 100 ppb. The hydrogel host matrix itself was not responsive to either pH or DO. The Ru(dpp)3 donor and BPB acceptor used in the sensor matrix are, however, prone to pH and oxygen interference. The sensor was exposed to buffers ranging from pH 4 to 11; it exhibited a nonlinear phase-shift signal, and the lifetime responses indicated that the sensor has low sensitivity to pH. Additionally, a phosphate buffer of pH 11 gave optimum performance, owing to the dissociation behavior of the ammonium ion in aqueous solution above pH 9.7. The sensing membrane did not respond to pH changes from pH 4 to 9, because it was sealed with a 50-µm-thick PTFE membrane, which eliminated cation interference. The sensor is fully reversible, as shown by forward and backward calibration slopes that were statistically compared using ANOVA (P < 0.05). In general, ammonia sensors based on pH indicators also respond to other uncharged amines. Hence, we examined the interference of dicyclohexylamine (pKa 10.40), urea (pKa 0.10), and butylamine (pKa 10.60) toward DA (pKa 9.25), each at the same concentration of 500 ppb. None of these species interfered with the DA sensor; the log Kopt values determined (defined as the log of the ratio of the interfering signal to the analyte signal at the same concentration) were 1.17 for ammonia-dicyclohexylamine, 1.18 for ammonia-urea, and 1.2 for ammonia-butylamine. The optical DA sensor was shown to be operational over the specified range of 10 to 40 °C; the sensitivity slope of the DA sensor decreased at temperatures above 40 °C. A drawback of using Ru(dpp)3 as a donor is that it is prone to oxygen quenching. A t-test/ANOVA was used to compare the sensor response to a series of DA concentrations at two oxygen saturation levels, i.e., 0 and 100% DO. The null hypothesis H0 was that no difference in the sensor sensitivity slope would be observed between 0 and 100% DO, while the alternative hypothesis HA stated the opposite. H0 was accepted: the sensors showed no significant effect of DO (P > 0.05). The donor lifetime decreased proportionally under both DO conditions. The stability of the DA sensor was assessed by leaving multiple DA sensors in a 50 ppb DA solution for 9 d. The donor lifetime decreased from 3.81 to 3.25 µs after 4 d of exposure and then remained stable at about 3.11 µs until day 9. This could be due to photobleaching of the BPB dye, as demonstrated in the absorbance study.

Sensor deployment

Ammonia is toxic to blue tilapia at concentrations above 2.5 ppm, (14) at which level it causes tissue damage that in turn increases susceptibility to disease. The DA sensor developed here can be used to determine the actual DA content of a controlled tilapia breeding pond and was verified against the Nessler standard method, complying with test B in ASTM 146-05. (14) The ammonia sensor showed 80% accuracy, comparable to that of the standard Nessler method. Nessler's reagent was used to determine the ammonia concentration in the sample solutions by the direct nesslerization technique.
1 mL of Nessler reagent was added to each test solution, and the solutions were analyzed using a UV-vis spectrophotometer.

Conclusions

A DA sensor based on FRET lifetime, capable of determining sub-ppm concentrations, has been successfully developed for in situ water monitoring in tilapia breeding ponds. Benchmarked against the Nessler method as the standard analytical method for DA, the sensor demonstrated good analytical performance and a low detection limit (10 ppb), with an accuracy of 80% relative to the standard Nessler procedure. The hydrogel outperformed PVC and sol-gel as a polymeric membrane matrix, with immobilized Ru(dpp)3 and bromophenol blue working well as the donor and acceptor, respectively, in the FRET system. In addition, the solid-state sensing-material optimization study showed sufficient FRET donor-acceptor spectral overlap with a high quenching constant; a KD of 0.4927 was achieved with the hydrogel Ru(dpp)3-BPB-based sensing matrix. The FRET optical sensing system was optimized by studying the modulation frequency, the sensitivity, and interference with respect to pH, oxygen, and amines. The DA sensor has been used to measure the DA content of a controlled tilapia breeding pond, where the measured DA level was below 500 ppb.

About the Authors

Zainiharyati Mohd Zain obtained a Bachelor of Industrial Chemistry degree with a minor in Management in 1997, an MSc (Chemistry) in 2005, and a PhD (Electroanalytical Chemistry) in 2010 from Universiti Sains Malaysia. Her research interests center on sensors, ranging from implantable microelectrodes for neurochemicals, DNA biosensors for cancer genes, optical FRET-based sensors, and electrochemical sensors for gunshot residues to immunosensors for Alzheimer's biomarkers.
3,474.8
2018-01-01T00:00:00.000
[ "Environmental Science", "Chemistry" ]
EXPERIMENTAL STUDIES OF BEAM-BEAM EFFECTS IN THE TEVATRON

The long-range beam-beam interactions limit the achievable luminosity in the Tevatron. During the past year several studies were performed on ways of removing the limitations at all stages of the operational cycle. We report here on some of these studies, including the effects of changing the helical orbits at injection and collision, tune and chromaticity scans, and coupling due to the beam-beam interactions.

GENERAL OBSERVATIONS

The Tevatron is currently colliding 36 proton against 36 antiproton bunches, where either beam consists of 3 equally spaced trains of 12 bunches in a common single vacuum chamber. The two beams are separated by a helical orbit except at the two locations of High Energy Physics (HEP) experiments, where they collide head on. The beam-beam effect gains more and more importance as the Tevatron beam intensity continues to increase in the quest for higher luminosity [1]. Recently, the total beam intensities injected into the Tevatron have been slightly over 10×10^12 protons and 1.2×10^12 antiprotons, which subsequently yielded a peak luminosity of 43×10^30 cm^-2 s^-1 with over 8×10^12 protons and 0.9×10^12 antiprotons in collision. Intensive studies were carried out in recent months to understand and quantify limitations from beam-beam effects.

Beam Loss Through Shot

The typical beam loss through the shot setup is shown in Figure 1. The red curve is the energy of the Tevatron, the green curve is the total proton bunch intensity, and the black curve is the total antiproton bunch intensity. The poor proton lifetime is caused by a limited dynamic aperture.

Figure 1: Beam intensity and loss during injection, ramping and squeeze

During ramping, there is about 10-12% antiproton loss and about 5-7% proton loss. There is also about 5% combined beam loss during the low-beta squeeze and the halo-removal process. The losses are smaller for shorter bunches (a ~30% reduction of longitudinal emittance will reduce losses to ~3-4%) and for smaller transverse emittances (almost no losses occur if the transverse emittances are less than 12 pi mm mrad, while typical emittances are 20-25 pi mm mrad). Antiproton losses are much higher in case of insufficient separation.
Antiproton Only

A study with antiprotons only proved that the antiproton loss on the ramp is caused by the beam-beam effect. Figure 2 shows the antiproton intensity during ramping and squeezing. No bunched-beam loss (red curve) is observed; some DC beam (black curve) is lost during ramping. Comparing the antiproton losses in Figures 1 and 2, we conclude that the loss is mainly caused by the presence of strong proton bunches.

Improving Sequence #13

The antiproton loss due to insufficient separation was also a problem at the initial stage of Tevatron Run II operation, as shown in Figure 3. There was a huge antiproton loss during the ramp when the Tevatron energy was greater than 500 GeV in ramping sequence #13, illustrated by the cyan curve (C:FBIANG) on the left of Figure 3. After painstaking studies, the problem was identified as insufficient beam separation. The solution was one more ramping break point with increased beam separation, added between ramping sequences #13 and #14. After this improvement, the loss due to beam-beam effects was eliminated, as shown in the similar graph on the right of Figure 3.

TUNE MEASUREMENT

There are basically 3 systems for tune measurement in the Tevatron [2]. These include two Schottky systems, an old one at 21 MHz and a new one at 1.75 GHz, and the tune meter, which measures the beam betatron oscillation frequency by FFT and is able to measure the tune bunch by bunch. The beam can be excited using either a stripline kicker or the Tevatron electron lens [3].

Tune Scan

One can find the best working point for protons and antiprotons by a tune scan. Figure 4 shows the result of one of the tune scans carried out at the end of a HEP store. The tune scan displays the loss versus the working point. The proton and antiproton losses were measured by the CDF detector. The top figure shows the loss rate of protons and the bottom one that of antiprotons. The blue color in the graph indicates small losses, and the crosses stand for the initial nominal working point at the time of the tune scan study. The scan indicates that there is some room for daily operational optimization of the working point.

Tune Diagram

By exciting or gating individual bunches, we can measure the bunch-by-bunch tune of the 36×36 bunches. The graph below shows the tunes of the 36 antiproton bunches at the end of the store. For this study, we 'tickled' each antiproton bunch and measured the tune spectrum with the Schottky signal monitor. We found that the vertical tune of the first bunch in each train of 12 bunches was lower and that a few bunches at the end of the train had higher tunes. The magnitudes of the tune shifts, and the fact that the leading and trailing bunches are strongly affected, agree with the simulations [1, 4]. The first bunch had a lower tune because it missed one long-range beam-beam collision, and the higher tune was due to the fact that the last few bunches had smaller emittances. Figure 6 shows the usual working points of the Tevatron for proton and antiproton bunches.
The tunes of both beams overlap 12th-order resonance lines.

Emittance Scallops

Another beam-beam effect observed in the Tevatron is that antiproton emittances occasionally blow up when the tunes of the antiproton bunches are not optimized. The emittance blows up slowly, over about 5-15 min. Afterwards we observe an enhanced emittance variation from bunch to bunch, shown in Figure 7, which is called the emittance scallop effect. The emittance blow-up degrades the luminosity. Thus, studies were carried out attempting to use the TEL for beam-beam compensation to eliminate these emittance blow-ups [5]. Moreover, we must be careful in choosing the Tevatron working point. Unfortunately, the tunes, coupling, etc. drift in time [6]. It takes a lot of effort to re-optimize the Tevatron every other week.

EFFECTS OF HELIX SIZE

One of the most direct studies of the long-range beam-beam effect was carried out by investigating the effects of the helix size on the colliding beams. Figure 8 shows the proton and antiproton losses versus the helix size of the Tevatron beam orbit at 980 GeV at the end of a HEP store. The loss rates are low and approximately flat within 82-110% of the nominal helix size, and they rise as the helix size is decreased. This agrees with theoretical expectations, since much stronger long-range beam-beam effects cause larger beam-beam tune shifts and shift more of the beam onto nonlinear resonances as the beam separation gets smaller.

EFFECTS OF THE CHROMATICITY

The chromaticity is the main factor causing proton beam losses during injection at 150 GeV, when the proton beam is on the helical orbit. Generally, the lower the chromaticity, the smaller the proton loss. Figure 9 shows a chromaticity scan at injection energy on the proton orbit for coalesced bunches. One can see that when the horizontal chromaticity was lowered to 1 unit, we had minimum losses. However, we could lower the vertical chromaticity only down to 4 units, where the head-tail instability occurred. For normal Tevatron operation, we now apply transverse beam feedback in order to be able to lower the chromaticity and improve the proton lifetime at injection.

SUMMARY

Beam-beam effects are the key to further Tevatron Run II upgrades. Intensive studies were carried out on the beam separation scheme, working points, beam emittance control, etc. We have plans and a schedule for further beam-beam studies aimed at improving Tevatron performance with increasing beam intensity, so as to provide higher luminosity and stable operation.
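The tune meter described above extracts the betatron tune from turn-by-turn beam-position data via an FFT. A minimal illustration of that extraction on synthetic data (this is not the Tevatron tune-meter code):

```python
import numpy as np

# Synthetic turn-by-turn beam-position data: a betatron oscillation at a
# fractional tune near the Tevatron working point (~0.58), plus noise.
n_turns, q_true = 1024, 0.585
turns = np.arange(n_turns)
rng = np.random.default_rng(0)
bpm = np.cos(2 * np.pi * q_true * turns) + 0.1 * rng.standard_normal(n_turns)

# FFT of the windowed signal; the tune is the frequency of the peak.
spectrum = np.abs(np.fft.rfft(bpm * np.hanning(n_turns)))
freqs = np.fft.rfftfreq(n_turns, d=1.0)      # units: oscillations per turn
q_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# A real-valued signal cannot distinguish q from 1 - q, so a fractional
# tune above 0.5 shows up mirrored below 0.5.
print(f"peak at {q_peak:.4f}; candidate tunes {q_peak:.4f} or {1 - q_peak:.4f}")
```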
2,240
2003-05-12T00:00:00.000
[ "Physics" ]
Numerical investigation of algebraic oceanic turbulent mixing-layer models

In this paper we investigate the finite-time and asymptotic behaviour of algebraic turbulent mixing-layer models by numerical simulation. We compare the performances given by three different settings of the eddy viscosity. We consider Richardson number-based vertical eddy viscosity models. Two of these are classical algebraic turbulence models usually used in numerical simulations of global oceanic circulation, i.e. the Pacanowski-Philander and the Gent models, while the other one is a more recent model (Bennis et al., 2010) proposed to prevent numerical instabilities generated by physically unstable configurations. The numerical schemes are based on the standard finite element method. We perform some numerical tests for relatively large deviations of realistic initial conditions provided by the Tropical Atmosphere Ocean (TAO) array. These initial conditions correspond to states close to mixing-layer profiles, measured in the Equatorial Pacific region called the West-Pacific Warm Pool. We conclude that mixing-layer profiles could be considered as kinds of "absorbing configurations" in finite time that asymptotically evolve to steady states under the application of negative surface energy fluxes.

Introduction

The mixing layer is located immediately below the ocean surface, and its formation is due to atmospheric-oceanic exchange driven by the wind-induced stress, which generates strong turbulent mixing dominated by vertical fluxes. The dynamics of mixing layers play an important role in the global oceanic circulation and in global climate change. Indeed, the depth of the mixed layer, which is the upper homogeneous part of the mixing layer with almost constant density, is very important for determining the sea surface temperature (SST) range in oceanic and coastal areas. In addition, the heat stored within the oceanic mixed layer provides a source of heat that drives global variability such as El Niño. The mixed layer also has a deep impact on the evolution of polar ice (Goosse et al., 1999), and it is closely related to different aspects of oceanic bio-systems too. The bottom of the mixed layer corresponds to the top of the pycnocline, a zone of high density gradients (see Vialard and Delecluse, 1998; Defant, 1936; Lewandowski, 1997 for a physical description of the structure of mixing layers).
The Oceanic General Circulation Models (OGCM) include mixing-layer parameterizations in order to better take into account the influence of atmosphere-ocean surface interactions (Burchard et al., 2005; Wang et al., 2008). Indeed, they incorporate specific turbulence models for mixing layers, among them the algebraic ones, which parametrize the turbulent viscosity and diffusion by means of algebraic expressions in terms of the gradient Richardson number. The Richardson number represents the balance between stabilizing buoyancy forces and destabilizing shearing forces. These kinds of models were introduced in the 1980s (Pacanowski and Philander, 1981), and they apply to stratified shear flows that are assumed to have reached a vertical equilibrium after the vertical mixing generated by the wind stress has been restabilized by buoyancy forces. The model proposed by Pacanowski and Philander (1981) was modified in several ways in order to obtain a better fit to experimental data (Gent, 1991). Another kind of improvement was based upon the parameterization of the vertical profile of turbulent kinetic energy (KPP models; Large et al., 1994). In all these models, only vertical eddy diffusion effects are included. More complex and sophisticated parameterizations of the vertical turbulent mixing, such as the k−ε model, have also been developed (Mellor and Yamada, 1982; Gaspar et al., 1990) and used in the context of OGCM.

The mathematical and numerical analysis of these models faces hard technical difficulties and, to the knowledge of the authors, there are at present no references addressing this analysis for the last cited models. In this paper we focus on the classical algebraic turbulent mixing-layer models of Pacanowski and Philander (1981) and Gent (1991), and on a more recent algebraic turbulent mixing-layer model proposed in Bennis et al. (2010). Indeed, for these three models there exists a mathematical study performed by the authors of the present paper, especially focused on the non-linear asymptotic stability analysis under small perturbations, which applies to negative heat surface fluxes (Chacón-Rebollo et al., 2013a, b). In this paper we extend this study with a numerical investigation of the behaviour of the aforementioned models for relatively large deviations of the initial conditions from mixing-layer profiles. The applied initial perturbations admit meaningful physical interpretations: on the one hand, we take into account strong heating or precipitation phenomena at the surface, while on the other hand we consider the cooling or evaporation of surface water. These cases have not been analysed in the above-referenced papers, which deal with initial conditions close to mixing-layer profiles. In addition to studying the behaviour of these models for characteristic times of formation of well-developed surface turbulent mixing layers, we also investigate their asymptotic behaviour. We conclude that mixing-layer configurations attract the perturbed initial conditions on time scales of the order of several days, in agreement with the physics of the problem, and asymptotically evolve into the theoretical equilibria on time scales of the order of several months.
Setting of algebraic closure models for oceanic turbulent mixing layers

Let z ∈ [−h, 0] be the vertical spatial variable, where h > 0 denotes the thickness of the studied flow that must contain the mixing layer, and let t ≥ 0 be the time variable. The mixing layer is assumed to be strongly dominated by vertical fluxes, so that the velocity and density of the fluid are assumed to be horizontally homogeneous. The flow is turbulent and well mixed, so the vertical velocity vanishes. The model involves the statistical mean horizontal velocity (u, v) and the statistical mean density ρ as functions of the variables z and t. In the ocean, the density is a function of the temperature and salinity through a state law, so it is considered as a thermodynamic variable. The Coriolis force is neglected, which is a valid approximation in tropical seas. The variables u, v, ρ satisfy the one-dimensional Reynolds-averaged equations:

∂u/∂t = ∂/∂z (ν1 ∂u/∂z),  ∂v/∂t = ∂/∂z (ν1 ∂v/∂z),  ∂ρ/∂t = ∂/∂z (ν2 ∂ρ/∂z),  z ∈ (−h, 0), t > 0,  (1)

where ν1 = a1 + νT1 and ν2 = a2 + νT2 are, respectively, the total viscosity and diffusion. Here a1, a2 are the laminar viscosity and diffusion, and νT1, νT2 are the vertical eddy viscosity and diffusivity coefficients. These are assumed to be functions of the gradient Richardson number R, defined as

R = −(g/ρr) (∂ρ/∂z) / [(∂u/∂z)² + (∂v/∂z)²],  (2)

where g is the gravity constant and ρr is a reference density for the sea water.

The eddy coefficients corresponding to the Pacanowski-Philander (PP) model are given by

νT1 = b1/(1 + 5R)²,  νT2 = νT1/(1 + 5R),  (3)

with a1 = 10⁻⁴, b1 = 10⁻², a2 = 10⁻⁵ (units: m² s⁻¹). The Gent model is just a variant of the PP model, designed to better fit experimental data, given by

νT1 = b1/(1 + 10R)²,  νT2 = νT1/(1 + 10R),  (4)

with a1 = 10⁻⁴, b1 = 10⁻¹, a2 = 10⁻⁵ (units: m² s⁻¹). In general the PP and Gent models become numerically unstable if the initial conditions are physically unstable configurations, corresponding to R < 0 (Bennis et al., 2008). In Bennis et al. (2010), a model of the eddy diffusion that remains numerically stable for a large range of negative gradient Richardson numbers was introduced:

νT1 = b1/(1 + 5R)²,  νT2 = νT1/(1 + 5R)²,  (5)

with the same constants as the PP model.

In formulas (3) to (5), the eddy coefficients νT1 and νT2 are defined as functions of the gradient Richardson number through the terms (1 + γR)ⁿ appearing in the denominator. Hereafter, these three formulations will be denoted respectively by R213, R23 and R224, where the integer values are the exponents of (1 + γR) in the definitions of νT1 and νT2. The eddy coefficients defined by relations (3) to (5) all present a singularity at a negative value of the gradient Richardson number. In Fig. 1, we have plotted the curves ν1 = f1(R) and ν2 = f2(R) obtained with the different formulations. In Eqs. (3) and (4), the diffusivity coefficient ν2 becomes negative for sufficiently negative values of R, and therefore these models are no longer valid there.

We shall consider the following initial and boundary conditions for problem (1):

u = u0, v = v0, ρ = ρ0 at t = 0;  u = Ub, v = Vb, ρ = ρb at z = −h;  ν1 ∂u/∂z = Qu, ν1 ∂v/∂z = Qv, ν2 ∂ρ/∂z = Qρ at z = 0.  (6)

The Neumann boundary conditions at z = 0 represent the fluxes at the sea surface that model the forcing by the atmosphere. In particular, Qu, Qv are the surface momentum fluxes and Qρ represents the thermodynamic fluxes. The momentum fluxes are given by Qu = (ρa/ρr) Vu and Qv = (ρa/ρr) Vv, where ρa is the air density and Vu, Vv are, respectively, the stresses exerted by the zonal and the meridional winds:

Vu = CD |Ua| ua,  Vv = CD |Ua| va,

with Ua = (ua, va) the air velocity at the atmospheric boundary layer, and CD (= 1.2×10⁻³) a friction coefficient (Kowalik and Murty, 1993).
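To make the three closures concrete, here is a small Python sketch of the total viscosity and diffusivity laws f1(R) and f2(R) as reconstructed in Eqs. (3)-(5) above; treat the exact formulas as our reading of the PP, Gent and Bennis et al. models rather than as verbatim from this paper:

```python
A1, A2 = 1e-4, 1e-5   # laminar viscosity / diffusivity (m^2/s)

def eddy_coefficients(R, model="R224"):
    """Total viscosity nu1 and diffusivity nu2 as functions of the
    gradient Richardson number R, for the three algebraic closures."""
    if model == "R213":       # Pacanowski-Philander, gamma = 5
        b1, gamma = 1e-2, 5.0
        nu_t1 = b1 / (1 + gamma * R) ** 2
        nu_t2 = nu_t1 / (1 + gamma * R)       # negative for R < -1/gamma
    elif model == "R23":      # Gent variant, gamma = 10
        b1, gamma = 1e-1, 10.0
        nu_t1 = b1 / (1 + gamma * R) ** 2
        nu_t2 = nu_t1 / (1 + gamma * R)
    elif model == "R224":     # Bennis et al. (2010)
        b1, gamma = 1e-2, 5.0
        nu_t1 = b1 / (1 + gamma * R) ** 2
        nu_t2 = nu_t1 / (1 + gamma * R) ** 2  # even power => non-negative
    else:
        raise ValueError(model)
    return A1 + nu_t1, A2 + nu_t2

for model in ("R213", "R23", "R224"):
    nu1, nu2 = eddy_coefficients(-0.05, model)
    print(f"{model}: nu1 = {nu1:.3e}, nu2 = {nu2:.3e}  (R = -0.05)")
# R213/R23 produce negative nu2 for sufficiently negative R, which is
# unphysical; R224 keeps nu2 positive, with a singularity at R = -0.2.
```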
Note that models (1)-(6) are not expected to describe all the interaction phenomena occurring in the mixing layer. Their purpose is mainly to give a better understanding of algebraic closure models for oceanic turbulent mixing layers. Therefore, we use simplified equations governing the variables u (zonal velocity), v (meridional velocity) and ρ (density). In practice, mixing-layer models are coupled with 3-D models of global oceanic circulation that yield the boundary values for velocity and density at the bottom of the layer. This allows one to use finer grids (in the vertical direction) to compute the heat exchange through the ocean surface (Wang et al., 2008). The coupling of 1-D mixing-layer models to 3-D OGCM for the inner oceanic flows means that the 1-D model may be affected by multidimensional perturbations. In Chacón-Rebollo et al. (2013b), the authors tested whether, in tropical seas, replacing a multidimensional model with a 1-D mixing-layer model provides accurate results.

Equilibrium states of a perturbed model

We show in this section that there exist steady solutions to problem (1)-(6) for perturbed data. These perturbations may correspond to errors in the experimental measurements, round-off computational errors, errors in the boundary data coming from the approximate solution of the 3-D global model, etc. The steady solutions correspond to an equilibrium between destabilizing wind-shear effects and stabilizing cooling surface heat fluxes.

Let us consider the perturbed model (7), with the perturbed boundary and initial conditions (8). The existence and uniqueness of smooth equilibria for this problem are stated in Chacón-Rebollo et al. (2013b), and are given by the following:

Theorem 1. Assume that for any z ∈ I the implicit algebraic equation (9), whose right-hand side involves a function G(z) determined by the perturbed data, admits at least one solution Re. Then to each solution Re there corresponds a unique associated equilibrium solution of problem (7)-(8) in [H¹(I)]³, given by Eq. (10).

The existence of solutions of the algebraic equation (9) is ensured under Hypothesis 1, which requires in particular that, for some λ ∈ (0, 1) and for i = u, v, ρ, suitable bounds on the perturbed data hold. The assumption Qu > 0, Qv > 0 means that the wind velocity acts as a destabilizing agent for the mixing-layer flow, while Qρ < 0 means that the surface heat flux plays a stabilizing role. We conclude that for all considered models there exist steady solutions to problem (1)-(6) for perturbed data, given by Eq. (10).

It can be proved that under Hypothesis 1 there exists a unique gradient Richardson number Re for the Bennis et al. (2010) model (5) only (Chacón-Rebollo et al., 2013b), which can be interpreted as the intersection of the corresponding curves.

Remark 1. The equilibria of the unperturbed model (1) are studied in Bennis et al. (2010). In that case, Re does not depend on z, and, as a consequence, the equilibrium profiles for velocity and density are linear. The equilibria of the perturbed model (7)-(8) provided by Theorem 1 converge to those of the unperturbed model as the perturbations in the data vanish.
Numerical tests and results

In this section, we study the application of oceanic turbulent mixing-layer models in the equatorial Pacific region called the West-Pacific Warm Pool. The West-Pacific Warm Pool is an area located at the Equator between 120° E and 180° E, where the temperature is high and almost constant throughout the year (oscillating between 28 and 30 °C). Precipitation is intense and, as a consequence, the salinity is low. In particular, we are interested in perturbing real initial data taken from the TAO array (McPhaden, 1995). These data correspond to profiles close to mixing-layer configurations that have already been analysed by the authors of the present paper (Chacón-Rebollo et al., 2013b; Rubino, 2011). In these previous studies, we concluded that, starting from initial conditions close to mixing-layer profiles, a well-developed surface turbulent mixing layer is reached within two days, and mathematically stable equilibria within approximately two months. Here, we are mainly interested in analysing whether the formation first of a homogeneous mixing layer, and then of the theoretical equilibria, is attainable within the respective characteristic times even when starting from initial conditions far from both mixing-layer profiles and steady-state solutions.

To numerically solve system (7)-(8), we discretize the initial boundary value problem by linear piecewise finite elements. To describe the numerical scheme, assume that the interval I = [−h, 0] is divided into N subintervals of length Δz = h/N, with nodes zi = −h + iΔz, i = 0, ..., N, and construct the finite element space

Vh = {φh ∈ C⁰(Ī) : φh|[zi, zi+1] ∈ P1, i = 0, ..., N−1}.

To discretize the equation for u, for instance, we implement a semi-implicit method in which the diffusion term is treated implicitly while the eddy coefficients are evaluated at the previous time step:

(uh^{n+1} − uh^n)/Δt − ∂/∂z (ν1(Rh^n) ∂uh^{n+1}/∂z) = 0,  (11)

with similar discretizations for v and ρ,

(vh^{n+1} − vh^n)/Δt − ∂/∂z (ν1(Rh^n) ∂vh^{n+1}/∂z) = 0,  (ρh^{n+1} − ρh^n)/Δt − ∂/∂z (ν2(Rh^n) ∂ρh^{n+1}/∂z) = 0,  (12)

where Rh^n denotes the gradient Richardson number evaluated from (uh^n, vh^n, ρh^n) through Eq. (2), supplemented with the discrete counterparts of the boundary conditions (6). This discretization, under certain hypotheses, is stable and verifies a maximum principle (Bennis et al., 2010). In Chacón-Rebollo et al. (2013b), we developed a specific numerical analysis for the discrete equilibria of system (7)-(8) with the actual viscosities corresponding to the PP, Gent and Bennis et al. models, proving existence, uniqueness and convergence to the continuous equilibria. Also, in Chacón-Rebollo et al. (2013a), we proved the non-linear stability of these equilibria under slight initial perturbations of the steady solutions.

Here, we present two tests. In Test 1, we apply the numerical scheme (11)-(12) with a large negative deviation from realistic density initial conditions in the absence of convection (i.e. ∂z ρ0 < 0 for any z ∈ (−h, 0)). The perturbation applied in this case is intended to simulate strong initial heating or precipitation processes at the surface. In Test 2, we initialize the code with a large positive surface deviation from a real density profile in the presence of convection (i.e. ∂z ρ0 > 0 for some z ∈ (−h, 0)). The perturbation applied in this case simulates strong initial physical processes such as the cooling of the surface water or evaporation. All the numerical experiments are grid-size- and time-step-independent, in the sense that the results remain practically unchanged as Δz and Δt decrease.
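A compact sketch of this semi-implicit time step on a uniform vertical grid, using finite differences in place of the paper's P1 finite elements for brevity (the two coincide up to quadrature on a uniform mesh); the profile data and flux values below are placeholders:

```python
import numpy as np

g, rho_r = 9.81, 1025.0                      # gravity, reference density
A1, A2, B1, GAMMA = 1e-4, 1e-5, 1e-2, 5.0    # R224 constants

def richardson(u, v, rho, dz):
    du, dv, drho = np.gradient(u, dz), np.gradient(v, dz), np.gradient(rho, dz)
    shear2 = du**2 + dv**2 + 1e-12           # avoid division by zero
    return -(g / rho_r) * drho / shear2

def nu_r224(R):
    nu_t1 = B1 / (1 + GAMMA * R) ** 2
    return A1 + nu_t1, A2 + nu_t1 / (1 + GAMMA * R) ** 2

def step(u, nu, dt, dz, flux_top, u_bottom):
    """One semi-implicit step of du/dt = d/dz(nu du/dz): nu frozen at
    time n, diffusion implicit -> a tridiagonal linear system.
    Index 0 is the bottom (z = -h), index -1 the surface (z = 0)."""
    n = len(u)
    nu_f = 0.5 * (nu[:-1] + nu[1:])          # nu at cell faces
    A = np.zeros((n, n)); b = u.copy()
    for i in range(1, n - 1):
        w, e = dt * nu_f[i-1] / dz**2, dt * nu_f[i] / dz**2
        A[i, i-1], A[i, i], A[i, i+1] = -w, 1 + w + e, -e
    A[0, 0] = 1.0; b[0] = u_bottom           # Dirichlet at z = -h
    A[-1, -1], A[-1, -2] = 1.0, -1.0         # Neumann flux nu*du/dz = Q at z = 0
    b[-1] = dz * flux_top / nu[-1]
    return np.linalg.solve(A, b)

# One illustrative step on placeholder profiles:
z = np.linspace(-100.0, 0.0, 21); dz = z[1] - z[0]
u = np.zeros(21); v = np.zeros(21); rho = 1025.0 - 0.01 * z
R = richardson(u, v, rho, dz)
nu1, nu2 = nu_r224(R)
u = step(u, nu1, dt=60.0, dz=dz, flux_top=1e-4, u_bottom=0.0)
```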
The initial and boundary data

We perturb initial data available from the TAO array. The TAO project aims at studying the exchange between tropical oceans and the atmosphere, providing data often used in many numerical simulations. In particular, the velocity data come from acoustic Doppler current profiler (ADCP) measurements. To obtain the actual profiles, we approximate the data measured at 0° N, 165° E by linear interpolation, and we strongly perturb the density at the surface, leaving the velocity profiles unchanged. We consider a negative buoyancy flux acting on the sea surface, equal to −10⁻⁶ kg m⁻² s⁻¹ in all cases, so that the only turbulence source is the wind stress. This negative surface heat flux is in agreement with Gent (1991). The studied layer depth is 100 m.

Test 1

In this test, we consider a perturbation of the initial conditions corresponding to the TAO data for 15 June 1991. The initial profiles are displayed in Fig. 2. The initial zonal velocity presents a westward current at the surface and, below it, an eastward undercurrent whose maximum is located at about −55 m. Deeper still, we observe a westward undercurrent. The initial meridional velocity presents a southward current whose maximum is located at about −90 m. The initial density profile presents a strong negative deviation from the original data at the surface (first 20 m), in order to move the density profile away from a (homogeneous) mixing-layer configuration. Physically, we are in the absence of convection, since the initial density profile has a negative derivative along the water column, and with this perturbation we are simulating an important initial increase in heating or precipitation. At the surface, we impose a zonal wind equal to 8.1 m s⁻¹ (eastward wind) and a meridional wind equal to 2.1 m s⁻¹ (northward wind).

The formation of a well-developed mixing layer is achieved by integrating the various models up to time t = 192 h (8 days). The grid spacing is Δz = 5 m and the time step is Δt = 60 s. The corresponding numerical results are displayed in Fig. 3. We adopt hereafter a standard definition of mixed layers (Thomson and Fine, 2003; Peters et al., 1988), which states that the base of the mixed layer is the depth at which the density changes by 0.01 kg m⁻³. The final density profile displays a similar mixed layer, about 20 m deep, for the R213, R23 and R224 models, in agreement with the observations reported by Boyer Montégut et al. (2004). Furthermore, the pycnocline simulated by the three models is similar. In the upper oceanic layer, the surface currents for the R213 and R224 models show almost the same behaviour, while the R23 model underestimates these currents. We notice an increase in the zonal and meridional surface currents in comparison with the initial profiles, which is in agreement with the application of a northeasterly wind. The surface current behaviour can be explained by the final viscosity and diffusivity values, displayed in Fig. 4 for all models, where in particular we observe that the R23 model produces the strongest viscosity and diffusivity.

To obtain the steady states of the flow, we integrate the various models up to t = 10 000 h (about 14 months) with Δt = 1 h. In Fig. 5, we can notice the formation of linear steady profiles for velocity and density, conforming to the theoretical expectations (Bennis et al., 2010). Similar numerical results are obtained for the R213 and R224 models, while the R23 model diverges from these results over the whole water column.
In Fig. 6, we can observe a monotonic numerical convergence to the steady states for all models, the R23 model being the one that reaches a stable equilibrium first (after about 3000 h, i.e. approximately 4 months). Here, we compute the residual values rn from the difference between consecutive time iterates, and we refer to a stable equilibrium when rn < 10⁻⁶. Collecting data at time t = 10 000 h implies the consideration of a subsequent relaxation time until rn ∼ 10⁻¹² is obtained for all models. This shows the non-linear stability of the equilibrium solutions, which act as point-wise attractors whenever we apply a negative buoyancy flux at the surface, as in this case. In particular, the steady states attract configurations corresponding to mixing layers, which appear as intermediate transient states reached by the asymptotic evolution of the flow towards the mathematical equilibria.

Test 2

The code is initialized with a perturbation of the 15 November 1990 data (see Fig. 7). The initial zonal velocity profile displays an eastward current all along the water column, whose maximum is located at the sea surface. The initial meridional velocity displays a southward current whose maximum is located at about 40 m depth. The initial density profile displays a strong positive deviation from the original data at the surface (first 20 m), in order to move the density profile away from a (non-homogeneous) mixing-layer configuration and to create a deep (first 50 m) surface zone of significantly negative initial gradient Richardson numbers. We are thus in the presence of convection, where the convection phenomena are linked in particular to physical processes such as the cooling of surface water or evaporation. Note that static instability zones in the density profiles can be detected with a certain frequency in the TAO data. In Fig. 8, we display the initial gradient Richardson number, which is indeed negative over the first 50 m of depth. This fact is reflected in the initial diffusivities of the various models, as shown in Fig. 9, where in particular we observe negative values for models R213 and R23, while for model R224 the values are always positive. Physically, negative diffusivity cannot exist, so we cannot use the R213 and R23 models in their original formulations here (see Eqs. 3 and 4), as described in Pacanowski and Philander (1981) and Gent (1991).
In practice, ocean modellers bypass this problem by extending the eddy diffusivities into those regions with positive constant values. An example is given by a modified Pacanowski-Philander model, developed in Timmermann and Beckmann (2004), which gives good results in the presence of convection. It consists of adding a constant value of 10⁻² m² s⁻¹ to the turbulent diffusion (and viscosity), depending on depth. In particular, the modified expressions depend on the Monin-Obukhov length, which characterizes the oceanic surface boundary layer where the wind effects are strong, and which is computed from a diagnostic equation given by Lemke (1987). Timmermann and Beckmann (2004) show that the modified model gives good results in the Weddell Sea, an Antarctic region where important convective phenomena occur, but it requires the computation of supplementary parameters. The R224 model presents the advantage that it can be used without computing any additional parameter, which is an important feature in situations where different turbulent regimes must be simulated. We thus use only the R224 model for this test. The large value displayed by the initial R224 diffusivity corresponds to a negative gradient Richardson number near its singularity, that is, at R = −0.2.

The formation of a homogeneous mixing layer is achieved by integrating the R224 model up to t = 48 h (2 days). The grid spacing is Δz = 5 m and the time step is Δt = 60 s. We impose a zonal wind equal to 7.3 m s⁻¹ (eastward wind) and a meridional wind equal to 2.1 m s⁻¹ (northward wind) at the surface. The results are displayed in Fig. 10. We observe that the application of a negative buoyancy flux at the surface permits the stabilization of the water column, producing a 70 m deep homogeneous mixed layer. We notice an increase in the zonal and meridional surface currents, which is in agreement with the application of a northeasterly wind. During the computation, the model is able to adjust the diffusivity automatically, reducing the initial peak until it converges to a range between 10⁻⁴ m² s⁻¹ and 10⁻² m² s⁻¹, in agreement with Osborn and Cox (1972). By estimating the diffusivity from measurements of very-small-scale vertical structures, they showed that it lies in the range [10⁻⁶ m² s⁻¹, 10⁻¹ m² s⁻¹] in the studied region, i.e. the West-Pacific Warm Pool.

To obtain the steady states of the flow, we integrate model R224 up to t = 10 000 h (about 14 months), with Δt = 1 h. As the equilibrium solution is unique in the case of model R224, we can compute the theoretical solution and compare it with the numerical one. In Fig. 11, we observe practically identical linear profiles for the theoretical and the numerical solutions. In Fig. 12, the residual values display a good monotonic numerical convergence to the steady states. Thus, although physically stable equilibria correspond to positive gradient Richardson numbers, even initial conditions that present strong and deep vertical instabilities can asymptotically converge to those equilibria, i.e. such initial conditions are also within the range of attraction.
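The mixed-layer depth criterion used in these tests (the depth at which density differs from its surface value by 0.01 kg m⁻³) is easy to evaluate on a discrete profile. A minimal sketch with a made-up density profile:

```python
import numpy as np

def mixed_layer_depth(z, rho, threshold=0.01):
    """Depth (positive, in m) at which density first differs from the
    surface value by `threshold` kg/m^3. z runs from -h (bottom) to 0."""
    rho_surf = rho[-1]                    # value at z = 0
    for zi, ri in zip(z[::-1], rho[::-1]):  # scan downward from the surface
        if abs(ri - rho_surf) > threshold:
            return -zi
    return -z[0]                          # mixed down to the bottom

# Made-up profile: homogeneous top 20 m above a linear pycnocline
z = np.linspace(-100.0, 0.0, 201)
rho = np.where(z > -20.0, 1022.0, 1022.0 + 0.05 * (-20.0 - z))
print(f"mixed-layer depth = {mixed_layer_depth(z, rho):.1f} m")  # ~20 m
```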
Conclusions

In the numerical experiments, we have observed the formation of mixing layers as well as the attainment of equilibrium profiles, starting from physically reasonable initial conditions far from these two configurations and remaining within the respective characteristic formation times. We have noticed an increase in the zonal surface current when applying an eastward wind at the surface (u_a > 0); likewise, a northward wind at the surface (v_a > 0) causes an increase in the meridional surface current. These results are in agreement with physical expectations. Furthermore, looking at the residual values, we observe a correct convergence to steady states, corresponding to rather high levels of turbulent diffusion due to the dissipative nature of the model equations. We have also noticed that the convergence to steady states takes times of the order of several months. This time scale is much larger than that needed to generate a typical homogeneous mixed layer under steady boundary data, which is of the order of a few days. This may explain why these equilibrium profiles are not found in real surface layers, as the boundary data usually change on time scales of the order of hours; even the formation of a homogeneous mixed layer is not possible if these data change too fast. From a numerical point of view, the investigation of the asymptotic behaviour of algebraic turbulence models for oceanic mixing layers has been useful, on the one hand, for showing the excellent stability properties of these schemes for negative heat fluxes and, on the other, for proving that the equilibrium states behave as point-wise attractors for intermediate transient mixing-layer configurations, for which the analysed models provide accurate physical predictions. In their turn, mixing-layer profiles act as "absorbing configurations" with characteristic times of a few days.

where D_u, D_v and D_ρ are functions of z, and σ_u, σ_v, σ_ρ, U_b, V_b and R_b are constants. Let us denote I = (−h, 0), and define the functions [...] (Chacón-Rebollo et al., 2013b), which could be interpreted as the intersection of the curves [...]
Salt Solubility Products of Diprenorphine Hydrochloride, Codeine and Lidocaine Hydrochlorides and Phosphates – Novel Method of Data Analysis Not Dependent on Explicit Solubility Equations

A novel general approach is described to address many of the challenges of salt solubility determination of drug substances, with data processing and refinement of equilibrium constants encoded in the computer program pDISOL-X™. The new approach is illustrated by the determination of the solubility products of diprenorphine hydrochloride, codeine hydrochloride and phosphate, and lidocaine hydrochloride and phosphate at 25 °C, using a recently optimized saturation shake-flask protocol. The effects of different buffers (Britton-Robinson universal and Sörensen phosphate) were compared. Lidocaine precipitates were characterized by X-ray powder diffraction (XRPD) and polarized light microscopy. The ionic strength in the studied systems ranged from 0.25 to 4.3 M. Codeine (and possibly diprenorphine) chloride was less soluble than the phosphate for pH > 2; the reverse trend was evident with lidocaine. Diprenorphine saturated solutions showed departure from the predictions of the Henderson-Hasselbalch equation in alkaline (pH > 9) solutions, consistent with the formation of a mixed-charge anionic dimer.

Introduction

Salt selection in preformulation is an important step in the preparation of effective oral drug formulations [1][2][3][4][5][6][7][8]. Salt solubility, being a conditional constant, takes on different values according to the concentrations and types of reactants used in a particular study. For this reason, laboratory-to-laboratory comparisons can be complicated, possibly leading to conflicting interpretations of in vitro dissolution studies in formulation development. Salt solubility products, on the other hand, are true equilibrium constants, but they are not often reported in published salt solubility studies. Interpretation and scaling of salt solubility measurements can be very challenging for a number of reasons:

• Since drug salts are often much more soluble than the corresponding uncharged forms, salt solubility measurement is usually carried out in relatively concentrated solutions, with ionic strengths, I, often exceeding 1 M.
• At these high levels, activity coefficients of ions are poorly controlled and cannot be accurately predicted by the traditional Debye-Hückel equation.
• pH electrodes calibrated in buffers with I = 0.15 M (the "physiological" level) may not be accurate at much higher ionic strengths, especially in the extreme regions of pH (< 1 or > 12), where electrode junction potentials may differ considerably from those characteristic of the physiological level.
• The salt solubility is a conditional constant, which depends not only on the concentration of the drug but also on that of the counterion with which the charged drug precipitates. The counterion may originate from the buffer used or from other unsuspected solution additives.
• At such high sample concentrations, many drugs, especially those likely to be surface-active, can form micelles, self-associated aggregates (dimers, trimers, or higher-order oligomers), or complexes with buffer species or other solution additives.
All these effects can complicate the interpretation of solubility data [9][10][11][12][13][14][15][16][17][18]. Such complexity may not be evident unless a full solubility-pH profile is measured, over a pH range containing the uncharged and charged forms of the drug, preferably at more than one solid-sample excess. Carefully planned experimental designs and complicated computations are often needed to interpret the measured salt solubility correctly.

Although explicit solubility equations (cf. Appendix A) have been derived for many different cases of salt solubility and aggregation [19], it is hardly practical to derive such equations for the vast number of possible forms of salt and aggregation stoichiometry that can be encountered. For this reason, salt solubility data analysis had in the past been done on a case-by-case basis, sometimes using incomplete explicit solubility equations. At times the impact of aggregation reactions had been recognized but not dealt with quantitatively, presumably because computational methods were not available at the time [9,10,16].

In this study we describe a novel general approach to address many of the challenges of salt solubility determination, with data processing and refinement of equilibrium constants encoded in the computer program pDISOL-X™ (in-ADME Research). The new approach is illustrated by the determination of the solubility products of three model drugs: diprenorphine hydrochloride, codeine hydrochloride and phosphate, and lidocaine hydrochloride and phosphate. Diprenorphine, whose solubility-pH profile had not been reported, is primarily used as an opioid antagonist to reverse the effects of etorphine and carfentanil. Codeine is a naturally occurring analgesic opiate. Lidocaine is a local anesthetic. The effects of different buffers (Britton-Robinson universal and Sörensen phosphate) were compared. Lidocaine precipitates were characterized by X-ray powder diffraction and polarized light microscopy. The ionic strength in the studied systems ranged from 0.25 to 4.3 M.

Chemicals

Codeine hydrochloride, codeine phosphate, lidocaine free base and lidocaine hydrochloride were purchased from Sigma and used without further purification. Diprenorphine hydrochloride was an in-house synthesized compound of analytical grade.

Potassium hydroxide and hydrochloric acid used for pKa determination were from Sigma. The KOH titrant solution was standardized by titration against primary-standard potassium hydrogen phthalate dissolved in 15 mL of 0.15 M potassium chloride. Potassium hydrogen phthalate and potassium chloride were of analytical grade and purchased from Sigma. The HCl titrant solution was standardized by titrating a measured volume against the standardized KOH.

Two buffer solutions (Britton-Robinson universal and Sörensen phosphate) were used at pH 5 in the shake-flask experiments with codeine and lidocaine. The Britton-Robinson buffer solution (a mixture of acetic, phosphoric and boric acids, each at 0.04 M) was treated with 0.2 M NaOH to give the required pH. The Sörensen phosphate buffer solutions were prepared by mixing 0.067 M Na2HPO4 and 0.067 M KH2PO4 solutions to reach the required pH (4.8-8.5). Phosphate-containing solutions between pH 1 and 12.5 were used for the equilibrium solubility measurements of diprenorphine, where 0.5 M KOH and 0.5 M or 14.85 M H3PO4 (standardized 85%) solutions were used to reach the desired pH values. Distilled water of Ph. Eur. grade was used. All other reagents were of analytical grade.
pH Electrode Standardization and Compensation for Large Changes in Ionic Strength

All of the equilibrium constants reported here are based on the concentration scale, i.e., the "constant ionic medium" thermodynamic standard state [19]. Since the measured pH is based on the operational activity scale, these values need to be converted to the concentration scale, pcH (= −log[H+]). The procedure to calibrate and standardize the pH electrode is described in detail elsewhere [19]. Briefly, the electrode is standardized by the blank titration method: to a 0.15 M KCl solution, enough standardized 0.5 M HCl is added to lower the pH to 1.8 (0.87 ± 0.03 mL when using a 20 mL solution); the acidified solution is then precisely titrated with standardized 0.5 M KOH up to about pH 12.2 (which consumes about 2 mL of 0.5 M KOH). The blank titration pH data are fit to a four-parameter equation [20]:

pH = α + kS · pcH + jH [H+] + jOH (Kw/[H+])    (1)

where Kw is the ionization constant of water. The jH term corrects pH readings for the nonlinear pH response due to liquid-junction and asymmetry potentials in highly acidic solutions (pH < 2), while the jOH term corrects for the corresponding high-pH nonlinear effects [19]. Typical values of the adjustable parameters at 25 °C and 0.15 M ionic strength are α = 0.09, kS = 1.002, jH = 0.5 and jOH = −0.5; however, each electrode has its own characteristic set.

Since salt solubility measurements can be made under conditions where the ionic strength reaches values as high as 1 M, the experimentally determined parameters in Eq. (1), based on the "reference" ionic strength of 0.15 M, are automatically compensated by the computational procedure in pDISOL-X for changes in ionic strength from the reference value to the actual values in a particular solubility assay, according to empirically determined relationships defined elsewhere [19].

pKa Determination

The pKa values of diprenorphine were determined by potentiometry using a GLpKa instrument (Sirius, Forest Row, UK) equipped with a combination Ag/AgCl pH electrode. The titrations were carried out at constant ionic strength (I = 0.15 M KCl) and temperature (T = 25.0 ± 0.5 °C), under a nitrogen atmosphere. Aqueous solutions of diprenorphine hydrochloride (10 mL, 0.7-0.8 mM) were pre-acidified to pH 2 with 0.5 M HCl and then titrated with 0.5 M KOH to pH 12. Three parallel measurements were carried out. The pKa values were calculated by the RefinementPro™ software (Sirius, Forest Row, UK).

Solubility Measurements by the Saturation Shake-flask Method

In the shake-flask experiments, the pH of the solutions was measured with a Radiometer PHM 220 pH meter with a combined Ag/AgCl glass electrode. The temperature of the samples was maintained at 25.0 ± 0.5 °C during the solubility measurements using a Lauda thermostat. A Heidolph MR 1000 magnetic stirrer was used to mix the two phases. The concentration in the supernatant was measured by spectroscopy using a Jasco V-550 UV/VIS spectrophotometer.

To facilitate the measurement of concentration by UV spectrophotometry, the specific absorbance (A¹%₁cm, the absorbance of a 1 g/100 mL solution over a 1 cm optical path length at a given wavelength) of each sample at the given pH values was determined separately at a selected wavelength, using 12-18 points from a minimum of two parallel dilution series, from the linear regression equation (Lambert-Beer law).
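As a numerical illustration of the four-parameter electrode model in Eq. (1) above, the sketch below evaluates the operational pH from pcH and inverts the relation with a root finder. The parameter values are the typical ones quoted above; the bracketing interval and the function names are assumptions of this sketch, not the pDISOL-X implementation.

```python
import numpy as np
from scipy.optimize import brentq

def operational_ph(pch, alpha=0.09, ks=1.002, jh=0.5, joh=-0.5, pkw=13.76):
    """Four-parameter electrode model (Eq. 1, as reconstructed here):
    pH_measured = alpha + ks*pcH + jH*[H+] + jOH*Kw/[H+].
    pkw ~ 13.76 at 0.15 M KCl, 25 C (assumed value)."""
    h = 10.0 ** (-pch)
    kw = 10.0 ** (-pkw)
    return alpha + ks * pch + jh * h + joh * kw / h

def pch_from_measured(ph_meas):
    """Invert the electrode model numerically to obtain pcH = -log[H+].
    The bracket covers the usual titration range (pH ~ 1.8 to 12.2)."""
    return brentq(lambda pch: operational_ph(pch) - ph_meas, 0.2, 13.3)
```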
The equilibrium solubility of the samples at different pH values was determined by the new saturation shake-flask protocol [21]. The sample was added to the aqueous Britton-Robinson universal or Sörensen phosphate buffer (codeine and lidocaine) or to the phosphate-containing solution (diprenorphine) until a heterogeneous system (solid sample and liquid) was obtained. The solution containing solid excess of the sample was stirred for 6 hours (saturation time) at controlled temperature, allowing it to reach thermodynamic equilibrium. After a further 18 hours of sedimentation, the concentration of the saturated solution was measured by UV spectroscopy. Three aliquots were taken from the liquid with a fine pipette (5-500 μL) and diluted with the solvent if necessary. At least three parallel concentration measurements were carried out, and the result was calculated from 9-12 data points. The standard deviation varied from 1 to 13%.

Refinement of Intrinsic and Salt Solubility and Aggregation Constants

The new data analysis method uses logS-pH values as measured input data (along with the standard deviations in logS) for the computer program. The data can be UV-derived, potentiometrically derived, etc. An algorithm was developed that considers the contributions of all species present in solution, including universal buffer components (e.g., Britton-Robinson, Sörensen, Prideaux-Ward, etc.). The approach does not depend on any explicitly derived extensions of the Henderson-Hasselbalch equations. The computational algorithm derives its own implicit equations internally, given any practical number of equilibria and estimated constants, which are subsequently refined by weighted nonlinear least-squares regression. So, in principle, drug salt precipitates, aggregates, complexes, and drug-bile-salt or drug-surfactant species can be accommodated [19]. Specific buffer-drug species can be tested, as is often necessary with phosphate- and, sometimes, citrate-containing buffers. The program assumes an initial condition of a suspension of the solid drug in a solution often containing a background electrolyte, e.g., 0.15 M NaCl, ideally with the suspension saturated over a wide range of pH. The computer program calculates the distribution of species corresponding to a sequence of additions of standardized strong-acid titrant HCl (or weak-acid titrants H3PO4, H2SO4, acetic acid, maleic acid, lactic acid) to simulate the suspension speciation down to pH ~ 0, the staging point for the next operation. A sequence of perturbations with standardized NaOH (or KOH) is then simulated, and the solubility is calculated at each point (in pH steps of 0.005-0.2) until pH ~ 13 is reached. The ionic strength is rigorously calculated at each step (cf. Appendix B), and the pKa values (as well as solubility products, aggregation and complexation constants) are adjusted accordingly (cf. Appendix B). The nonlinear pH electrode standardization parameters (Eq. 1) are included in the calculation [20], a feature that is especially important for accurate pH measurement in the extreme pH (< 1 or > 12) regions.

At the end of the speciation simulation, the calculated logS vs. pH curve is compared to the measured logS vs. pH values. A logS-weighted nonlinear least-squares refinement then commences to refine the proposed equilibrium model, using analytical expressions for the differential equations.
The process is repeated until the differences between calculated and measured logS values reach a minimum. Specifically, the logS-pH data are refined to minimize the weighted residual function

R_w = Σᵢ [ (logSᵢ^obs − logSᵢ^calc) / σᵢ ]²

where N is the number of measured solubility values used to test the model and logSᵢ^calc are the calculated log solubility values. The estimated standard deviation in the observed logS, σᵢ, is taken as 0.10 (log units) or set equal to the values reported in the measurement. The overall quality of the refinement is assessed by the "goodness-of-fit",

GOF = √( R_w / (N − N_p) )

where N_p is the number of refined parameters. If a proposed model fits the data well and accurate weighting factors are used, then GOF = 1 is the statistically expected value.

Results and Discussion

Table 1 summarizes some of the physicochemical properties of the three model compounds, including the pKa constants of diprenorphine determined in this work. Table 2 lists the shake-flask solubility measurement details, including the actual weights of the compounds added to form the saturated solutions. In critical salt solubility studies it is often necessary to state the actual weight of compound added, and not just that "excess solid was added," because salt solubility constants are conditional and in some cases the actual weights are required to determine solubility products, especially when aggregates form in saturated solutions or when the salt stoichiometries are complex. In Table 2, the calculated volumes of 14.85 M H3PO4 or 0.5 M KOH titrant used to adjust the pH are normalized to 1 mL total solution volume (actual volumes ranged from 0.5 to 15 mL). Table 3 lists the equilibrium constants determined and the critical concentrations (including ionic strength and buffer capacity) calculated in the various assays.

The pKa values of the model compounds are listed in Table 1 at standard-state conditions (I_ref = 0.15 M, 25 °C), but the primed constants in Table 3 are those actually applied at the specific ionic strengths (0.25-4.34 M), calculated as described in Appendix B. Those of diprenorphine changed slightly, by 0.0 to +0.05 (pKa1) and 0.0 to −0.04 (pKa2); that of codeine by +0.02 to +0.14; that of lidocaine by +0.15 to +0.51. According to simple Debye-Hückel theory, those of monoprotic bases are expected not to change, but Eq. B5 predicts otherwise. Also listed in Table 3 are the actual pKsp values used at the particular ionic strengths, compared to those at I_ref.

The solubility products are also affected significantly when I >> I_ref. For example, the phosphate pKsp of codeine phosphate decreased from 1.00 {1.11} at I_ref to −0.07 {0.02} at I = 1.21 {1.07} M, where the braced values refer to Britton-Robinson universal and the unbraced values to Sörensen buffer measurements (Table 3). Lidocaine showed similarly large shifts due to the large I = 4.34 {3.81} M. The changes were much smaller for the codeine chloride pKsp, since I = 0.33 {0.24} M. The pKa values of the acetate, phosphate and borate buffer components used were taken from the Wiki-pKa website (<http://www.in-ADME.com/wiki_pka.php/>). These were automatically adjusted for changes in ionic strength by pDISOL-X. The changes were most pronounced with phosphoric acid (0.17-0.62) and minimal with acetic and boric acids (0.01-0.06).
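A minimal sketch of the weighted residual and goodness-of-fit statistics as written above (assuming the reconstructed formulas; the function and argument names are illustrative, not those of pDISOL-X):

```python
import numpy as np

def goodness_of_fit(logS_obs, logS_calc, sigma=0.10, n_params=0):
    """Weighted residual sum and GOF:
    R_w = sum_i ((logS_obs_i - logS_calc_i) / sigma_i)^2
    GOF = sqrt(R_w / (N - N_p));  GOF ~ 1 indicates a well-fit model."""
    logS_obs = np.asarray(logS_obs, dtype=float)
    logS_calc = np.asarray(logS_calc, dtype=float)
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), logS_obs.shape)
    rw = np.sum(((logS_obs - logS_calc) / sigma) ** 2)
    gof = np.sqrt(rw / (logS_obs.size - n_params))
    return rw, gof
```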
Diprenorphine Hydrochloride

Figure 1a shows the logS vs. pH profile of diprenorphine hydrochloride (XH2+ Cl−). The dashed line is calculated with the Henderson-Hasselbalch equation (Eq. A5), using the two pKa values of diprenorphine. The solid curve was calculated from the equilibrium model most consistent with the actual logS measurements at the various pH values. For pH > pKaGIBBS (5.07; cf. Appendix A), the precipitate is the uncharged ordinary ampholyte, showing the characteristic parabolic shape. The intrinsic solubility (molar units) was refined as pS0 = 4.60 ± 0.08 (S0 = 11.5 ± 0.4 µg mL⁻¹).

At pH below the Gibbs pKa, either the chloride or the phosphate salt precipitates (or possibly both). It is not easy to be certain which form precipitates unless several measurements are made at pH < 2, in order to exploit the common-ion effect, which would be evident for the phosphate salt since phosphoric acid was used to lower the pH. The solubility at pH 0.83 suggests that a phosphate precipitate may form at very low pH. In contrast, the chloride model predicts that all solid dissolves at pH 0.83 (given the amount of sample added). The flat shape of the curve at pH 2-5 is most consistent with a chloride precipitate; a phosphate precipitate would be expected to show an upward curvature near the Gibbs pKa. The constant based on the assumed chloride precipitate is pKsp = 2.06 ± 0.07 (S_XH = 44.6 ± 1.5 mg mL⁻¹, in hydrochloride equivalents).

At pH > 9, the logS-pH curve shows a shift to lower pH compared with what would be predicted from the Henderson-Hasselbalch equation. A consistent interpretation of the shift is that a water-soluble mixed-charge anionic dimer forms, with the stoichiometry XHX⁻ [19]. Similar species have been observed in numerous studies, and in some cases LC/MS was able to corroborate the hypothesized aggregate formation [18]. The equilibrium constant for the reaction XH + X⁻ = XHX⁻ is log K_XHX = 5.21 ± 0.18 M⁻¹.

Codeine Hydrochloride and Dihydrogenphosphate

Figure 1b shows the codeine hydrochloride (BH+ Cl−) and phosphate (BH+ H2PO4−) logS-pH profiles. The curve was calculated assuming the intrinsic solubility pS0 = 1.52 (S0 = 12 mg mL⁻¹, assuming the phosphate salt molecular weight) reported by Kuhne et al. [24]. Although single pH measurements were made, the whole curve adds useful perspective to the expected solubility-pH relationship. The assignment of the types of salts formed is possible because of the way the assays were designed. The codeine phosphate assay had no source of chloride, and thus revealed the phosphate pKsp = 1.00 {1.11}. The codeine chloride assay indicated a significantly higher pKsp = 1.59 {1.57}, suggesting that the salt precipitate was the chloride. Furthermore, the solubility product of the phosphate salt was not exceeded in the chloride assay (Table 3), suggesting the absence of the phosphate salt in the chloride assay.
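For reference, the dashed Henderson-Hasselbalch curves in Fig. 1 follow the standard ampholyte expression; a minimal sketch is shown below. The pS0 value is the refined diprenorphine value quoted above, while the pKa values are placeholders (the measured constants are listed in Table 1, which is not reproduced here).

```python
import numpy as np

def logS_hh(pH, pS0, pKa1, pKa2):
    """Henderson-Hasselbalch solubility-pH profile of an ordinary
    ampholyte (cf. the dashed curve in Fig. 1a):
    logS = -pS0 + log10(1 + 10**(pKa1 - pH) + 10**(pH - pKa2))."""
    return -pS0 + np.log10(1.0 + 10.0**(pKa1 - pH) + 10.0**(pH - pKa2))

pH = np.linspace(0.0, 13.0, 200)
# pS0 = 4.60 from the refinement above; pKa1 and pKa2 below are
# placeholder values for illustration only.
curve = logS_hh(pH, pS0=4.60, pKa1=8.0, pKa2=9.5)
```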
Lidocaine Free Base and Hydrochloride

Figure 1c shows the lidocaine phosphate (BH+ H2PO4−) logS-pH profile. The chloride was very soluble, with S_BH > 1 g mL⁻¹. The chloride pKsp values in Table 3 refer to the minimum possible values: pKspBHCl = −0.99 {−0.85}, suggesting that the chloride salt is at least 50 times more soluble than the phosphate salt. The solid curve was calculated assuming the intrinsic solubility pS0 = 2.04 (S0 = 2.5 mg mL⁻¹, assuming the BH+ Cl− molecular weight) reported by Bergström et al. [16]. As in the case of the codeine assays, the assignment of the types of salts formed with lidocaine is possible because of the way the assays were designed. The lidocaine free base assay had no source of chloride, and thus revealed the phosphate pKsp = 0.85 {0.88}. The solubility product of the phosphate salt was not exceeded in the chloride assay (Table 3), suggesting the absence of the phosphate salt there.

The unfilled circle symbols in Figure 1c are the solubility values reported by Bergström et al. [16], obtained with the miniaturized shake-flask method (25 °C, 0.15 M phosphate buffer, 24 h incubation).

Even though the pKsp values of the phosphate precipitate in the free-base case were nearly identical for the two buffer systems used, the morphology of the isolated crystals was quite different, as shown in Figure 2: the Sörensen buffer produced much larger and better-formed crystals than the Britton-Robinson buffer (approx. 800 vs. 60 µm, respectively). The X-ray powder diffractograms in Figure 3 confirmed that the crystals formed in the free-base assays were of the same phosphate salt polymorph.

Conclusion

The design of the salt solubility assays described here and the data analysis capability of the new program, pDISOL-X, offered an opportunity to critically investigate issues related to the challenges of characterizing salt solubility products, such as the high ionic strengths, which can affect the values of equilibrium constants and the calibration of pH electrodes. The "anomalies" in the shapes of logS-pH profiles that cannot be accurately predicted by the Henderson-Hasselbalch equation may be common with sparingly soluble or practically insoluble drugs, such as diprenorphine, but they are not always easy to recognize unless accurately determined pKa values are available [28]. Phosphate buffers can dramatically influence the solubility profiles of ionizable drugs, as shown here and elsewhere [16,29]. These and other similar complications may be common, but they are not always easy to interpret quantitatively. In such instances, pDISOL-X may be a helpful data analysis and simulation tool. It can further aid the analysis of dissolution mechanisms that depend on the salt solubility of drugs.

Gibbs pKa Values

At the lower pH = pKa1Gibbs, two solids co-precipitate: XH2+Cl−(s) and XH(s). At the higher pH = pKa2Gibbs, two different solids co-precipitate: Na+X−(s) and XH(s). More discussion of the topic may be found in the literature [19,30,31].

Cationic, Anionic and Neutral Self-Aggregate Formations

In concentrated drug solutions, or in solutions containing practically insoluble drugs, oligomeric species such as Xₙⁿ⁻, (XH2+)ₙⁿ⁺, (XH)ₙ, (XH·X−)ₙⁿ⁻ and (XH·XH2+)ₙⁿ⁺ can form. These can characteristically alter the shape of the solubility-pH profile. Consequently, most of Eqs. A1-A9 would need to be modified to accommodate aggregation equilibria. The resultant solubility equations often become very complex, and not all combinations of possible species have had the corresponding equations derived. Examples of such derivations of aggregation-solubility equations may be found in the literature [19].
B) Automatic Ionic Strength Compensation

Generally, the ionic strength, I, changes in the course of an acid-base titration due to ionizations, additions of titrant, and dilution. This change affects the acid-base equilibrium constants. In solubility experiments designed to determine salt solubility products, Ksp, the ionic strength can vary substantially during a titration, sometimes reaching values as high as 1 M or even higher. In contrast, ionization constants, pKa, are determined at a nearly constant I = 0.15 M, under conditions where the low sample concentrations (e.g., 10⁻³ to 10⁻⁶ M) are "swamped" by the added inert salt (e.g., 0.15 M NaCl or KCl). The independently determined pKa constants are critical to the analysis of solubility data, and the above large differences in ionic strength therefore need to be factored in, as described here.

It is reasonable practice to designate 0.15 M as the "reference" ionic strength, I_ref (the "physiological" level), to which all equilibrium constants are scaled in the solubility assay. In the older literature the reference state was often chosen as zero, but in current pharmaceutical applications 0.15 M is usually chosen, with no loss of thermodynamic rigor [19].

Since the ionic strength at any given pH point in a titration is likely to differ from I_ref, all ionization constants need to be locally transformed (from the reference I_ref to the local I) for the calculation of the local point concentrations. The procedure below describes such an adjustment of activity coefficients.

Consider a three-reactant system based on reactants X, Y and H (proton), whose charges are Q_x, Q_y and +1. The concentration of the j-th species, C_j, is defined in terms of these reactants as

C_j = β_j [X]^(e_xj) [Y]^(e_yj) [H]^(e_hj)    (B2)

where β_j is the cumulative formation constant [19], and e_xj, e_yj and e_hj are the X, Y and H stoichiometric coefficients, respectively, of the j-th species.

At each pH point in a solubility assay, I is calculated precisely according to the general formula [32]

I = ½ { Σ_j Q_j² C_j + n_X C_X + n_Y C_Y }    (B3)

The last two terms in braces in Eq. B3 take into account any number n_X or n_Y of counter-ions introduced to the solution by drug substances in salt form (per X- or Y-compound, respectively).

The reference set of β(I_ref) formation constants is locally transformed to the set β(I) according to the general expression

β_j(I) = β_j(I_ref) · [ f_j(I_ref) / f_j(I) ] · [ f_x(I)^(e_xj) f_y(I)^(e_yj) f_h(I)^(e_hj) / ( f_x(I_ref)^(e_xj) f_y(I_ref)^(e_yj) f_h(I_ref)^(e_hj) ) ]    (B4)

where the ionic-strength-dependent activity coefficients of X, Y, H and the j-th species (cf. Eq. B2) are denoted f_x, f_y, f_h and f_j, respectively. A similar equation was introduced by Avdeef [32], based on the Davies-modified Debye-Hückel equation, which is reasonably accurate up to I = 0.3 M.
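A sketch of the point-wise ionic strength calculation of Eq. B3 follows, assuming monovalent counter-ions as in the reconstruction above; the function and argument names are illustrative:

```python
import numpy as np

def ionic_strength(conc, charge, counter=()):
    """Ionic strength I = 0.5 * (sum_j Q_j^2 * C_j + counter-ion terms)
    over all charged species (Eq. B3).

    `counter` is an iterable of (n, C) pairs: n counter-ions per formula
    unit of a salt-form reactant at dissolved concentration C; each
    counter-ion is assumed monovalent here."""
    conc = np.asarray(conc, dtype=float)
    charge = np.asarray(charge, dtype=float)
    i = np.sum(charge ** 2 * conc)
    i += sum(n * c for n, c in counter)
    return 0.5 * i
```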
Since much higher ionic strengths are reached in salt solubility experiments, Eq. B4 in the current study is cast in an expanded activity-coefficient equation (Eq. B5) based on the hydration theory proposed by Stokes and Robinson [32,33], further elaborated by Bates et al. [34] and Robinson and Bates [35] to include single-ion activities, then slightly modified by Bockris and Reddy [36], and recently applied to the solubility data of a drug-like molecule by Wang et al. The first term on the right side of Eq. B5 is the Debye-Hückel term, accounting for ion-ion electrostatic interactions; the second term is related to the decrease in the activity of water due to the work done in immobilizing some of the bulk water to hydrate the ions; the third term is related to the free-energy change of the ions, whose concentrations effectively increase as the volume of bulk water decreases upon hydration. The parameters at 25 °C (molar scale) are: dielectric constant of water, ε = 78.3, 76.8 and 67.3 at I = 0, 0.15 and 1 M (NaCl), respectively [37]; Debye-Hückel slope, A = 1.825×10⁶ (εT)^(−3/2) = 0.512, 0.528 and 0.642 at I = 0, 0.15 and 1 M, respectively; B = 50.29 (εT)^(−1/2) = 0.329, 0.333 and 0.355 at I = 0, 0.15 and 1 M, respectively; T is the absolute temperature (K). The two adjustable parameters are åᵢ, the mean diameter of the i-th hydrated ion [38], and hᵢ, the hydration number of the i-th ion [37]. The activity of water is a_H2O = 1.000, 0.995 and 0.967 at I = 0, 0.15 and 1 M, respectively [36]. C_H2O = 55.51 M (the concentration of water in pure solvent form, I = 0). The summation symbols in Eq. B5 run over all charged species (including the reactants) in the system. (β(I) refers to ionic strength I, while β(I_ref) refers to I_ref.)

Figure 1: Solubility-pH profiles of the three model drugs studied. (See text.)
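The quoted Debye-Hückel slopes can be checked numerically; the sketch below evaluates the first (ion-ion) term of Eq. B5 from the definitions of A and B given above. Only this term is implemented; the hydration terms are omitted.

```python
import numpy as np

def debye_huckel_log_f(I, z, a_ang, eps=78.3, T=298.15):
    """First (ion-ion) term of the extended activity-coefficient model:
    log f_i = -A z_i^2 sqrt(I) / (1 + B a_i sqrt(I)),
    with A = 1.825e6 (eps*T)^(-3/2) and B = 50.29 (eps*T)^(-1/2)
    (molar scale; a_i in angstroms)."""
    A = 1.825e6 * (eps * T) ** -1.5
    B = 50.29 * (eps * T) ** -0.5
    sI = np.sqrt(I)
    return -A * z ** 2 * sI / (1.0 + B * a_ang * sI)

# Reproduces the quoted slopes at 25 C: A ~ 0.512 for eps = 78.3 (I = 0)
# and A ~ 0.642 for eps = 67.3 (I = 1 M).
```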
A Truncated Singular Value Decomposition Enhanced Nested Complex Source Beam Method

This work presents a novel matrix compression algorithm to improve the computational efficiency of the nested complex source beam (NCSB) method. The algorithm is based on the application of the truncated singular value decomposition (TSVD) to the multilevel aggregation, translation and disaggregation operations in NCSB. In our implementation, the aggregation/disaggregation matrices are solved by truncated far-field matching, which is based on the directional far-field radiation property of complex source beams (CSBs). Furthermore, the translation matrices are obtained according to the beam width of the CSBs. Owing to the high directivity of the CSB radiation patterns, all the far-field interaction matrices are low-rank. Therefore, TSVD can be employed, and a new set of equivalent sources can be constructed as a linear combination of the original CSBs. It is proved that the radiated power of the new sources is proportional to the square of the corresponding singular values. This provides a theoretical guideline for dropping insignificant singular vectors in the calculation. In doing so, the efficiency of the original NCSB method is much improved while reasonably good accuracy is maintained. Several numerical tests are conducted to validate the proposed method.

Introduction

The integral equation (IE) method has been intensively studied for the analysis of electromagnetic (EM) radiation and scattering in recent years. For a numerical solution, the IE is usually discretized into a matrix equation by the method of moments (MoM) [1]. In order to conduct large-scale simulations, a variety of numerical techniques have been developed based on MoM. These techniques achieve efficiency either by accelerating the far-field interactions, exploiting the physical or mathematical properties of the IE, or by using well-designed basis/test functions to reduce the total number of unknowns.

In the first category, the multilevel fast multipole algorithm (MLFMA) [2] is widely used, where the far-field interactions are accelerated by aggregation, translation and disaggregation operations. Besides it, the low-rank property of the MoM far-field submatrices has led to a series of matrix decomposition methods, such as the multilevel matrix decomposition algorithm (MLMDA) [3] and the adaptive cross approximation (ACA) [4,5]. In the second category, higher-order basis functions (HOBF) [6], phase-extracted (PE) basis functions [7,8], characteristic basis functions (CBF) [9,10] and body-of-revolution (BoR) [11] MoM have been proposed to reduce the number of unknowns in a given problem.

Following the first category, a complex source beam method of moments (CSB-MoM) was recently proposed to accelerate the far-field interactions of MoM [12]. In this method, the object is divided into groups, and complex source beams (CSBs) [13] are used to expand the fields of the basis functions residing in each group [14,15]. Hence, the far-field interactions of these basis/test functions can be accounted for by their equivalent CSB expansions. Owing to the directional nature of CSBs [16], the interactions usually involve only a small portion of the total CSBs. A multilevel version of this method was developed in [17] to further improve the computational efficiency. The branch-cut issue of CSBs, which might degrade the accuracy of the group-interaction evaluation, can be avoided by a proper choice of the CSB parameters.
Unfortunately, in that method the CSB expansions at every level are calculated directly from the basis functions contained in the finest level. This operation is computationally very expensive; hence the application of the method to electrically large problems is prohibitive. To overcome this difficulty, a nested complex source beam (NCSB) method was proposed that utilizes an equivalence relationship between adjacent levels [18]. This relationship is built by treating the CSBs of a child group as new sources and applying far-field matching to get their CSB expansions in the parent group. In doing this, the computational complexity of NCSB can be reduced to O(N log N), where N is the number of unknowns. However, it should be noted that the CSBs involved in the translation process have to be determined empirically. Moreover, the aggregation and disaggregation matrices are still in a dense format.

To fully exploit the directional property of CSBs, a truncated singular value decomposition (TSVD) method is applied in this paper to compress the aggregation, translation and disaggregation matrices of NCSB. After the SVD, a set of equivalent sources is obtained as a linear combination of the original CSBs. A theoretical proof reveals that the radiated power of the new sources is proportional to the square of the corresponding singular values, which provides a guideline for the truncation. Thereby, the proposed method not only leads to a significant improvement in computational efficiency but also provides a flexible compromise between accuracy and computational cost.

Formulations

Given a 3D perfectly electrically conducting (PEC) body defined by its surface, a MoM matrix equation can be obtained as Z I = V, where I is the unknown vector containing the expansion coefficients of the current, V is the excitation vector, and Z is the MoM impedance matrix. After grouping, this dense matrix can be separated into a near-field interaction part and a far-field interaction counterpart.

In CSB-MoM, the far-field interactions between different groups are carried out by a series of CSBs launched on a complex equivalence surface enclosing each group [14]. The far-field part of the matrix-vector product (MVP) in CSB-MoM is represented in terms of an expansion matrix W, a translation matrix T and a local expansion matrix Y, where the groups are distinguished as observation and source groups and the level superscript indicates the finest level (the single level in CSB-MoM). W and Y are, respectively, the expansion matrix and the local expansion matrix for both the θ and φ far-field components. The far-field matching technique can be used to construct W and Y for the electric field integral equation (EFIE), the magnetic field integral equation (MFIE) [19] and the PMCHWT integral equation [20]. The elements of the translation matrix T couple pairs of CSBs through the dyadic Green's function evaluated with complex arguments at the complex launch points of the beams, the beam directions being given by the corresponding unit vectors.

By using this representation, the CSB expansion coefficients of a source group are expanded from the surface currents, while the CSB expansion coefficients of a receiving group are translated from those of the transmitting group. As mentioned previously, the directional property of CSBs can be used to reduce the computational cost of the translation procedure. The translation window can be set based on numerical experiments for different group sizes and CSB parameters, as is done in [18].
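To fix ideas, the single-level far-field MVP has the aggregate-translate-locally-expand structure sketched below. The container layout (a `groups` index map and dense per-group `W`, `Y` and per-pair `T` matrices) is an illustrative assumption, not the paper's actual data structure.

```python
import numpy as np

def far_mvp(groups, W, Y, T, I):
    """Single-level CSB-MoM far-field matrix-vector product (sketch).

    For each far pair (o, s): expand the currents into CSBs (W),
    translate between the groups (T), and locally expand onto the
    test functions (Y). `groups[g]` holds the basis-function indices
    of group g; near pairs are assumed to be handled separately."""
    V = np.zeros(I.size, dtype=complex)
    for o in groups:
        for s in groups:
            if o == s or (o, s) not in T:
                continue
            R_s = W[s] @ I[groups[s]]    # aggregation from currents
            R_o = T[(o, s)] @ R_s        # group-to-group translation
            V[groups[o]] += Y[o] @ R_o   # local expansion / testing
    return V
```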
NCSB Formulations.

Motivated by the idea of MLFMA [2], CSB-MoM can be naturally extended to a multilevel version if a similar octree data structure is adopted. However, the multilevel CSB (MLCSB) proposed in [17] calculates the CSB expansion coefficients for each level directly from the basis functions in the finest level. The lack of a proper mechanism for interlevel aggregation and disaggregation prevents this method from handling problems with large numbers of unknowns. To this end, the NCSB was proposed in [18], introducing aggregation and disaggregation operations between every two adjacent levels through a proper far-field matching.

In the following, we obtain the aggregation matrix in a slightly different way by further utilizing the directional property of the radiating far field of CSBs. Moreover, the symmetry is fully exploited so that only one aggregation matrix is needed for all eight child groups of one parent group. First, a linear system is set up to build the equivalence relationship of CSBs between two adjacent levels (Eq. 6), where Z_l is the matching matrix that connects the equivalent CSB sources at level l with the far fields F_{l+1,c} radiated from child level l + 1. The expression for the matching matrix is the same as the matrix used for CSB expansion with electric-current-type equivalent sources in CSB-MoM [12,14]. Different from the single-level CSB-MoM, the equivalent CSB sources in Z_l are launched at level l, and the far fields here are radiated by the CSBs at child level l + 1, with both θ and φ components.

It is noted that, for a parent group in the octree, there are at most eight different child groups. Hence, eight corresponding aggregation matrices would be required in the calculation. However, by fully exploiting the symmetry property, we find that only one aggregation matrix is needed and the others can easily be deduced from it. In the following, the level subscript will be omitted for simplicity.
In our method, the radiating far fields on the right-hand side of (6) can be calculated quickly by truncating the directional CSB fields within their paraxial regions. Once Z_l and F_{l+1,c} are assembled, the aggregation matrix for level l can be solved numerically; its dimensions are set by the numbers of CSBs at levels l and l + 1. In this work, we apply the least-squares method to improve the stability of the solution. In (9), each element of the block aggregation matrix Ã_l is the mapping coefficient between a CSB in the child group and a CSB in its parent group. By using this relationship, the disaggregation matrix can easily be obtained as the transpose of the aggregation matrix. The CSB expansion coefficients of a parent group at level l can be obtained efficiently from its child groups at level l + 1 with the aggregation matrix. Similar to MLFMA, the CSB expansion coefficients of a receiving group at level l + 1 are obtained both from translation within the same level and from disaggregation from its parent level l.

As elaborated above, the NCSB algorithm can be constructed in much the same way as MLFMA. However, all the operator matrices involved in NCSB are dense, in contrast with the diagonal matrices of MLFMA. An improvement can be made by utilizing the unique property of CSBs: all CSBs are directional, that is to say, a CSB can only interact directionally with another CSB in the far-field region. Hence, the dense aggregation/disaggregation and translation matrices are usually low-rank. To take advantage of this directional property, TSVD is applied to compress these matrices in the following sections.

TSVD in the Aggregation/Disaggregation Process.

For the aggregation matrix in (9), the SVD factorization can be applied as

Ã = U Σ Vᴴ    (12)

The physical meaning of this SVD can be interpreted as follows: one set of child CSB modes is coupled to another set of parent CSB modes, with the coupling strength governed by the corresponding singular value. Hence, after the SVD, the aggregation procedure is converted from beam-to-beam coupling into mode-to-mode coupling. Furthermore, we can prove that the power radiated by each child CSB mode is proportional to the square of the singular value (see Appendix). This power-related interpretation provides a helpful guideline for extracting the most significant part of the aggregation matrix. If k is the effective rank of Ã for a prescribed threshold ε, the SVD in (12) can be truncated and approximated as

Ã ≈ Ũ Σ̃ Ṽᴴ    (13)

where Σ̃ is formed by the first k (largest) singular values in Σ, satisfying σ_{k+1} < ε σ₁. Ũ and Ṽ denote the corresponding submatrices consisting of the first k columns of U and V, respectively. Hence, Ṽ contains all the child CSB modes whose radiated power exceeds ε² σ₁². For the prescribed accuracy, this reduced set of child-group CSBs suffices to represent the far field of the parent group. Since disaggregation is implemented as the transpose of aggregation, the TSVD procedure for the disaggregation matrix can be implemented by fully reusing Ũ and Ṽ from (13).
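A minimal numpy sketch of the truncation rule in (12)-(13), with disaggregation reusing the same factors through a plain transpose as stated above (function names are illustrative):

```python
import numpy as np

def tsvd(A, eps):
    """Truncate the SVD of an aggregation matrix, keeping the k modes
    with sigma_i >= eps * sigma_1 (cf. Eq. 13); the dropped child CSB
    modes radiate power below eps**2 * sigma_1**2."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s >= eps * s[0]))   # s is sorted in descending order
    return U[:, :k], s[:k], Vh[:k, :]

def aggregate(U_k, s_k, Vh_k, r_child):
    """Compressed aggregation: child coefficients -> parent coefficients
    via mode-to-mode coupling weighted by the singular values."""
    return U_k @ (s_k * (Vh_k @ r_child))

def disaggregate(U_k, s_k, Vh_k, r_parent):
    """Disaggregation as the plain transpose of the aggregation."""
    return Vh_k.T @ (s_k * (U_k.T @ r_parent))
```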
TSVD in the Translation Process.

As a CSB decays rapidly in the directions orthogonal to its propagation axis, in the translation process the interactions between far-field groups are carried out by the CSBs whose beam axes are aligned with the line connecting the group centers. These CSBs can easily be selected by setting a conical truncation window. To avoid the large number of numerical experiments otherwise needed to determine the minimal cone angle, in this paper we set the truncation angle at the point where the field of a CSB decays to −20 dB of the prescribed accuracy times its maximum value. According to this criterion, CSB selection matrices are introduced to pick out the beams within the window, and the reduced translation matrix is then factorized by TSVD in the same way as in (13), except that the threshold is set to ε² instead of ε, to account for the effects of both the transmitting and the receiving groups.

The proposed TSVD procedure is effective in providing a new set of CSB modes based on the singular vectors, whose significance in the total radiated power is governed by the descending singular values. The SVD process fully exploits the spatial-window property of CSBs, which is independent of the geometry, the current distribution and the type of IE. By defining an error threshold, redundant modes that contribute negligibly to the radiated power are excluded. Therefore, the aggregation, translation and disaggregation matrices can be represented in a very compact form. Furthermore, the balance between accuracy and computational efficiency can easily be controlled by adjusting the truncation threshold.

Numerical Results

In this section, several numerical tests are conducted to demonstrate the efficiency and validity of the proposed method.

Firstly, the low-rank property of the aggregation matrices of the NCSB method is studied. Figure 1 shows the normalized singular values of the aggregation matrices for level 2 (with electrical size 8λ, aggregating from level 3) and level 3 (with electrical size 4λ, aggregating from level 4). The numbers of CSBs for levels 2, 3 and 4 are 5738, 1758 and 682, as listed in Table 1. Considering both the θ and φ components of the CSBs, the dimensions of the aggregation matrices for level 2 and level 3 are 11476 × 3516 and 3516 × 1364, respectively. As shown in Figure 1, the singular values of both matrices decrease rapidly, indicating a quick decay of the power radiated by the individual child CSB modes. Owing to this fact, the aggregation matrices are truncated with thresholds of ε = 10⁻², 10⁻³ and 10⁻⁴ for comparison. Figure 2 depicts the relative errors of the radiated far field (an effective measure of the aggregation error) with respect to the observation angle φ, defined as

err(φ) = |E_agg(φ) − E(φ)| / max_φ |E(φ)|

where E(φ) is the reference field, E_agg(φ) is the field obtained via the TSVD aggregations, and the observation angle φ is set on an azimuthal circle. In Figure 2, the reference field is radiated directly by the CSBs of a group of size 2λ in the fourth level. Two aggregation steps with TSVD are then performed on this group up to the second level, and the field E_agg(φ) is finally calculated from the CSBs at the second level. Figure 2 shows that the relative aggregation error can be effectively controlled by the TSVD threshold ε.
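Returning to the conical truncation window used in the translation process, the selection reduces to a simple beam-axis test; the sketch below returns the boolean mask that plays the role of the 0/1 selection matrices (the names and the mask representation are assumptions of this sketch):

```python
import numpy as np

def translation_window(beam_axes, r_src, r_obs, theta_max):
    """Select the CSBs whose beam axes lie within a cone of half-angle
    `theta_max` around the line joining the two group centres.

    `beam_axes` is an (n, 3) array of unit beam-direction vectors;
    the returned boolean mask stands in for the selection matrix."""
    d = r_obs - r_src
    d = d / np.linalg.norm(d)
    cosang = beam_axes @ d
    return cosang >= np.cos(theta_max)
```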
Conclusion

In this paper, we have proposed an efficient implementation of NCSB. Different from previous work, the aggregation process is first constructed by truncated far-field matching. The dimension of the translation matrix is then reduced based on the beam width of the CSBs. Finally, TSVD is used to compress the aggregation, translation and disaggregation matrices by fully exploiting the directional property of CSBs. It is shown that the radiated power of the new sources obtained from the SVD is proportional to the square of the corresponding singular values. This power-dependent mode interpretation provides a theoretical guideline for extracting the most significant CSB contributions in the calculation. Therefore, the desired balance between accuracy and computational efficiency can easily be controlled by adjusting the truncation threshold.

Figure 1: Normalized singular values of the aggregation matrix.

Figure 5: Bistatic RCS of a PEC sphere with a diameter of 32λ, computed by NCSB-TSVD and the Mie-series solution.

In (12), U is a 2N_l × 2N_l unitary matrix, V is a 2N_{l+1} × 2N_{l+1} unitary matrix, and Σ = diag{σ₁, σ₂, ..., σ_{2N_{l+1}}} is a 2N_l × 2N_{l+1} diagonal matrix whose elements are the non-negative real singular values listed in descending order. The columns of V represent a new set of orthonormal child CSB modes at level l + 1, obtained as linear combinations of the original CSBs. The matrix U defines a complete set of orthonormal parent CSB modes associated with the child modes. The diagonal matrix Σ maps the child modes to the corresponding parent modes, weighted by the singular values.

Table 1: Computational statistics.

The truncated translation takes the form T̃ = L_o T L_s, where an entry of a selection matrix is 1 if the corresponding CSB lies within the conical truncation window and 0 otherwise. After this truncation based on the beam width of the CSBs, the dimension of the translation matrix T is reduced. However, the reduced matrix is still rank-deficient, and the TSVD procedure is then applied to it to gain further compression.
Adenylate cyclase 5 coordinates the action of ADP, P2Y1, P2Y13 and ATP-gated P2X7 receptors on axonal elongation

In adult brains, ionotropic and metabotropic purinergic receptors are widely expressed in neurons and glial cells. They play an essential role in inflammation and neurotransmission in response to purines secreted into the extracellular medium. Recent studies have demonstrated a role for purinergic receptors in the proliferation and differentiation of neural stem cells, although little is known about their role in regulating initial neuronal development and axon elongation. The objective of our study was to investigate the roles of several types of purinergic receptors, P2Y1, P2Y13 and P2X7, which are activated by ADP or ATP. To study the role and crosstalk of the P2Y1, P2Y13 and P2X7 purinergic receptors in axonal elongation, we treated neurons with specific agonists and antagonists, and we nucleofected neurons with expression or shRNA plasmids. ADP and P2Y1-GFP expression improved axonal elongation; conversely, P2Y13 and ATP-gated P2X7 receptors halted axonal elongation. Signaling through each of these receptor types was coordinated by adenylate cyclase 5. In neurons nucleofected with a cAMP FRET biosensor (ICUE3), addition of ADP or Brilliant Blue G, a P2X7 antagonist, increased cAMP levels in the distal region of the axon. Adenylate cyclase 5 inhibition or suppression impaired these cAMP increments. In conclusion, our results demonstrate a crosstalk between two metabotropic receptors and one ionotropic purinergic receptor that regulates cAMP levels through adenylate cyclase 5 and modulates axonal elongation triggered by neurotrophic factors and the PI3K-Akt-GSK3 pathway.

Introduction

Most early studies of the roles of nucleotides in development examined their intracellular functions. However, it is now generally accepted that purines and pyrimidines have potent extracellular actions mediated by the activation of specific membrane receptors. The role of purinergic signaling during developmental and pathological states is only now beginning to be explored, because of the large number of purinergic receptor subtypes involved and because purines affect every major cell type present in the CNS. ATP is released by different cell types in response to multiple physiological stimuli, as well as after programmed or injury-induced cell death, and can be converted into other purines, such as ADP, AMP or adenosine, by the action of extracellular ectonucleotidases (Burnstock, 2007). ATP, ADP and adenosine can activate purinergic receptors, which are subdivided into P1 and P2 receptors. The P2 receptor family consists of the cationic ATP-operated P2X receptors and the metabotropic G-protein-coupled P2Y receptors, which are activated by different purines and pyrimidines, among them ADP (Abbracchio et al., 2009). Although our understanding of the pathways of intercellular purinergic signaling is still limited, it is clear that purinergic signaling represents a main non-synaptic signaling mechanism in the normal and diseased CNS. Acting at purinergic receptors, extracellular purines can regulate neurotransmission and developmental events, such as cell migration or apoptosis, and they have been implicated in several central nervous system disorders (Burnstock, 2007; Burnstock, 2008).
Recent studies have shown that inhibition of the ATP-gated P2X7 receptor improves recovery after spinal cord injury (Peng et al., 2009; Wang et al., 2004), promotes axonal growth in hippocampal neurons (Diaz-Hernandez et al., 2008) and induces N2a cell differentiation (Gomez-Villafuertes et al., 2009). The regulation of axonal elongation is an important feature of neuronal development and of axonal recovery after injury, required to achieve functional neuronal connectivity. Neurotrophic factors, axon guidance molecules and neurotransmitter receptors play an essential role in this process (Huang and Reichardt, 2001; Mueller, 1999; Ruediger and Bolz, 2007). However, the potential role in nervous system development of purinergic receptors, and more precisely that of nucleotides such as ADP, which can activate various P2Y receptors, has not yet been investigated. The coordination of purinergic receptors specific for different purines also requires further investigation. In the context of improving axonal growth after axonal injury, we have employed a widely used model of embryonic cultured hippocampal neurons with the objective of understanding how different purines and purinergic receptors, such as P2X7, P2Y1 and P2Y13, regulate axonal elongation and how they are coordinated. Because these receptors are widely expressed in neurons and glial cells, the study was performed in pure cultured hippocampal neurons in the absence of glial cells. We show that ADP promotes axonal elongation through the P2Y1 receptor, whereas the P2Y13 receptor exerts a negative effect on axonal elongation, as described previously for the ATP-gated P2X7 receptor (Diaz-Hernandez et al., 2008). All three receptors are expressed in the distal region of the axon and modulate signaling pathways that involve extracellular calcium influx (through P2X7) and the Gq (P2Y1) and Gi (P2Y13) proteins. The coordinated action of these three signaling pathways contributes to the fine regulation of adenylate cyclase 5 (AC5, also known as ADCY5), which modulates cAMP levels and axon elongation through the PI3K-Akt-GSK3 pathway.

ADP regulates axonal elongation in cultured hippocampal neurons through the P2Y1 and P2Y13 receptors

In the light of previous results demonstrating how ATP modulates axon elongation through the ionotropic P2X7 receptor (Diaz-Hernandez et al., 2008), we analyzed whether ADP, acting through the metabotropic P2Y1 and P2Y13 receptors, can regulate axonal elongation. Both receptors are expressed in the hippocampus and in hippocampal neurons (supplementary material Fig. S1) and are specific for ADP (Burnstock, 2007). In our model of cultured hippocampal neurons, both receptors were found in the axon, with a more intense signal in the distal region (Fig. 1). The fluorescence of both P2Y1 and P2Y13 colocalized with microtubules in the distal region of the axon and with the actin-rich region of growth cones. Both receptors were also detected in the neuronal soma and, to a lesser extent, in the future dendrites.

Fig. 1. Neurons were stained with antibodies against α-tubulin (blue) and with Alexa-Fluor-594-phalloidin (red) to visualize neuronal morphology, and for P2Y1 (A) or P2Y13 (B; green). Scale bar: 100 µm. Boxes indicate the area of the distal region of the axon magnified in the panels on the right. (C,D) Graphs of P2Y1 and P2Y13 fluorescence (means ± s.e.m.) in 10 µm sections along the length of the axon. Axons approximately 250 µm long were analyzed in five neurons in two separate experiments.
(E) Boxplot showing the distribution of fluorescence intensities of the P2Y1 and P2Y13 receptors in the proximal (0-120 µm) and distal (120-250 µm) regions of the axon. There was a significant increase in the fluorescence signal for both receptors in the distal region of the axon compared with the proximal region; ***P < 0.001.

The influence of ADP on axon elongation was examined in hippocampal neurons treated with increasing concentrations of ADP (1 nM, 100 nM, 500 nM, 1 µM, 5 µM or 10 µM) from the first day in vitro (1 DIV) until 3 DIV (Fig. 2B). ADP significantly increased axon length compared with control untreated neurons (177.88 ± 5.6 µm), with a maximum effect at 5 µM (433.55 ± 18.69 µm) and an ED50 of ~1 µM (Fig. 2A,B). We then assessed the effect of ADP (5 µM) or 2-MeSADP (5 µM) on axon length; the latter is an ADP analogue with equal or greater specificity for the P2Y1 and P2Y13 purinergic receptors (Fig. 2A,C). Both 2-MeSADP and ADP enhanced axon growth to a similar extent (342.76 ± 9.99 µm and 347.70 ± 11.12 µm, respectively) compared with control neurons (150.47 ± 3.42 µm). By contrast, the addition of 1 mM ATP to cultured hippocampal neurons under the same conditions retarded axonal growth (114.12 ± 4.97 µm vs 174.25 ± 6.98 µm in control neurons; Fig. 2A,C).

Activation of P2Y1 and P2Y13 receptors by ADP exerts opposing effects on axon elongation

To identify the role of each receptor in axon elongation, neurons were cultured for 3 DIV and treated for the last 48 hours with the P2Y1 antagonist MRS-2179 or the P2Y13 antagonist MRS-2211 at different concentrations (supplementary material Fig. S2). Treatment with the P2Y1 antagonist MRS-2179 significantly retarded axonal growth (86.62 ± 2.30 µm at 0.5 µM and 70.54 ± 1.94 µm at 1 µM), whereas treatment with the P2Y13 antagonist MRS-2211 (1, 5 or 10 µM) significantly increased axonal growth (379.48 ± 12.99 µm, 405.39 ± 13.14 µm and 428.39 ± 13.68 µm, respectively) compared with control neurons (150.47 ± 3.42 µm), suggesting that the P2Y13 receptor negatively regulates axon elongation. Moreover, neurons cultured in the presence of MRS-2179 (1 µM) and ADP (5 µM) did not generate significantly longer axons (173.14 ± 8.47 µm) than control neurons (156.92 ± 5.69 µm), indicating that the influence of ADP on axon elongation is mediated by the P2Y1 receptor (Fig. 2D,E). Furthermore, the increase in axon length in the presence of both ADP and MRS-2211 was similar to that obtained when either compound was used alone (Fig. 2D,E).

The effects of the specific P2Y1 and P2Y13 antagonists were confirmed with interference short hairpin RNAs (shRNAs) against the P2Y1 or P2Y13 receptors (Fig. 3). The axons of neurons nucleofected with the scrambled (control) shRNA plasmid reached a length of 164.53 ± 3.73 µm, whereas nucleofection with P2Y1 shRNA 1 or P2Y1 shRNA 2 resulted in lengths of 97.60 ± 3.25 µm and 134.66 ± 3.78 µm, respectively (Fig. 3A,C). This reduction in axonal growth correlated with the decrease in P2Y1 protein expression produced by each shRNA (Fig. 3B,J). Moreover, treatment with ADP (5 µM) promoted axonal growth in neurons nucleofected with the scrambled shRNA (395.67 ± 10.57 µm) but not in those nucleofected with P2Y1 shRNA 1 (163.29 ± 5.22 µm vs 223.53 ± 5.37 µm in scrambled-shRNA-nucleofected neurons).
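The ED50 quoted above is the kind of value normally extracted by fitting a sigmoidal dose-response model to the axon-length data; a hedged sketch follows. The data array is illustrative only, loosely based on the reported control and maximal means, and is not the measured dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(c, bottom, top, ed50, hill):
    """Four-parameter logistic (Hill) curve, a standard way to extract
    an ED50 from axon length vs. [ADP] data such as Fig. 2B. The
    monotonic model ignores any decline at supramaximal doses."""
    return bottom + (top - bottom) / (1.0 + (ed50 / c) ** hill)

# Illustrative points only (ADP in micromolar, mean axon length in um).
c = np.array([0.001, 0.1, 0.5, 1.0, 5.0, 10.0])
length = np.array([180.0, 220.0, 300.0, 360.0, 433.0, 420.0])
popt, _ = curve_fit(dose_response, c, length,
                    p0=(180.0, 430.0, 1.0, 1.0), maxfev=10000)
# popt[2] is the fitted ED50 in micromolar.
```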
Fig. 2. ADP modulates axon elongation through the P2Y1 and P2Y13 purinergic receptors. (A) Hippocampal neurons cultured in the presence or absence of ADP (5 µM), ATP (1 mM) or the ADP analogue 2MeSADP (5 µM), from day 1 to day 3 in vitro. Neurons were fixed at 3 DIV and stained with antibodies against MAP2 (red, somatodendritic) and Tau-1 (green, axon) to define the neuronal morphology and quantify axon length. (B) Axon length of hippocampal neurons at 3 DIV following treatment with increasing concentrations of ADP (0.1, 0.5, 1, 5 and 10 µM). The resulting curve revealed an ED50 of 0.879 µM, with a maximum effect at 5 µM ADP. (C) Axon length at 3 DIV following treatment with ADP (5 µM), 2MeSADP (5 µM) or ATP (1 mM). Data are means ± s.e.m. from three independent experiments, with 100 neurons analyzed in each experiment and condition. (D) Hippocampal neurons incubated with the P2Y1 antagonist (MRS-2179) or the P2Y13 antagonist (MRS-2211) from day 1 to day 3 in vitro, in the presence or absence of ADP (5 µM). Neurons were stained with MAP2 and Tau-1 antibodies, and the length of their axons was quantified. (E) Graphs of the mean axon lengths ± s.e.m. of the control and treated neurons shown in D. All data are means ± s.e.m. from three independent experiments with 100 neurons analyzed in each experiment and condition; ***P<0.001. n.s., non-significant differences. Scale bars: 50 µm.

Fig. 3 legend: ... shRNA. Neurons were fixed at 3 DIV and stained with an anti-α-tubulin antibody. Nucleofected neurons were identified by their GFP fluorescence. (B) HEK-293T cells were co-transfected with GFP, P2Y1-GFP or P2Y13 plasmids, in combination with different P2Y1 or P2Y13 shRNAs. Data are means ± s.e.m. of three independent experiments. P2Y1-GFP and P2Y13 protein expression was normalized to α-tubulin expression levels; ***P<0.001. (C) Axon length of hippocampal neurons expressing scrambled shRNA, two different P2Y1 shRNAs or two different P2Y13 shRNAs was quantified after staining with antibodies against MAP2 and Tau-1. Data are mean axon lengths ± s.e.m. from three independent experiments, analyzing 100 neurons for each condition in each experiment; ***P<0.001. The dotted grey line indicates the mean axon length of scrambled-shRNA-nucleofected neurons. (D,E) Hippocampal neurons nucleofected with scrambled shRNA or P2Y1 shRNA and treated with ADP (5 µM) from day 1 to day 3 in vitro. The graph in D shows the axon length of nucleofected neurons (GFP-positive) incubated in the presence or absence of ADP. (F-H) Hippocampal neurons nucleofected with scrambled shRNA, P2Y1 shRNA or P2Y13 shRNA and treated with the P2Y1 antagonist (MRS-2179) or the P2Y13 antagonist (MRS-2211) from day 1 to day 3 in vitro. Scale bars: 50 µm. Note that in all cases P2Y1 expression and function is necessary for axon elongation. Data in G are the mean axon lengths ± s.e.m. from three independent experiments, analyzing 100 neurons for each condition in each experiment; ***P<0.001. H shows the distribution of the axon length for all neurons from three independent experiments for each condition (n=300). (I) Hippocampal neurons nucleofected with plasmids expressing GFP, P2Y1-GFP or P2Y13. After 3 DIV, neurons were stained for MAP2 and Tau-1 to identify the axon. (J) P2Y1 or P2Y13 mean fluorescence intensity along the axon in control, scrambled shRNA, P2Y1 shRNA or P2Y13 shRNA nucleofected neurons. (K) Graph of the mean axon lengths ± s.e.m. of neurons nucleofected with GFP, P2Y1-GFP, or P2Y13 and GFP. Neurons were quantified in three independent experiments, analyzing 100 neurons for each condition in each experiment; ***P<0.001. Scale bars: 100 µm.
Box-plot shows the distribution of axon lengths for all the neurons quantified in K.

In accordance with these results, the expression of P2Y1-GFP increased mean axon length to 324.24±9.24 µm (Fig. 3I,K), whereas co-nucleofection with plasmids expressing P2Y13 and GFP resulted in a mean axon length of 91.36±4.76 µm; the axon length of the control neurons was 164.53±3.73 µm. We analyzed the relative contributions of P2Y1 activation and P2Y13 inhibition to axon growth (Fig. 3F-H) by nucleofecting neurons with the scrambled shRNA or P2Y13 shRNA 2 and treating them with the P2Y1 antagonist MRS-2179. The increase in axon length following P2Y13 suppression (290.43±9.86 µm vs 181.70±6.96 µm in non-treated scrambled-shRNA-nucleofected neurons) was blocked by P2Y1 antagonism (137.08±4.67 µm). Similarly, the increase in axonal length seen in neurons treated with the P2Y13 antagonist MRS-2211 was impaired in neurons expressing P2Y1 interference RNA 1 (127.27±4.14 µm). Taken together, these findings indicate that P2Y1 activation is necessary and sufficient to promote axon elongation, whereas P2Y13 activation might inhibit this effect.

ADP-mediated axon elongation is dependent upon the status of P2X7

ATP negatively regulates axon growth through the activation of the P2X7 receptor, whereby the inhibition or suppression of P2X7 enhances axon growth. The immunofluorescence pattern of P2X7 revealed the same localization as the P2Y1 and P2Y13 receptors in the hippocampus, as previously described (Fig. 4E) (Diaz-Hernandez et al., 2008). To understand the relationship between the P2X and P2Y purinergic receptors, hippocampal neurons were nucleofected with a construct expressing the P2X7 receptor coupled to GFP, or with GFP alone, and treated with ADP (5 µM) from day 1 to day 3 in vitro, as described above. Interestingly, ADP treatment increased axon length in GFP-nucleofected neurons, but failed to do so in those expressing P2X7-GFP (Fig. 4A,B). Similarly, axons of P2Y1-GFP-expressing neurons treated with ATP (1 mM) were shorter than those of untreated GFP-nucleofected neurons (Fig. 4A,B). P2X7 inhibition with Brilliant Blue G (BBG; 100 nM) reversed the slow axon growth seen in neurons expressing either P2Y1 shRNAs or the P2Y13 receptor, resulting in an increase in axon length (Fig. 4C) that was nevertheless significantly lower than in scrambled-shRNA- or GFP-nucleofected neurons treated with BBG. Furthermore, P2X7 shRNAs abolished the effects of inhibiting the P2Y1 receptor with MRS-2179, whereas P2X7-GFP expression impaired the axon growth-promoting effect of the P2Y13 antagonist, MRS-2211 (Fig. 4D). These results suggest that P2X7 activation by ATP blocks an ADP-mediated signaling mechanism that is regulated by P2Y1 and by P2Y13.

Adenylate cyclase activity is necessary for the regulation of axonal elongation by P2Y1, P2Y13 and P2X7

To test the hypothesis that a common signaling pathway integrates P2Y1-, P2Y13- and P2X7-mediated signaling, we sought to identify common signaling components that can be regulated in opposite directions by Gi and Gq proteins, such as adenylate cyclases 1, 3, 5, 6, 8 and 9 (Willoughby and Cooper, 2007). We investigated this common signaling hypothesis in hippocampal primary cultures using four different experimental approaches (Fig. 5): (1) administration of an adenylate cyclase inhibitor (SQ-22536; 20 µM); (2) activation of adenylate cyclase with forskolin (5 µM); (3) administration of a competitive antagonist of cAMP-induced activation of PKA (cAMPS-Rp; 20 µM) or a PKA inhibitor (H-89; 5 µM); and (4) selective inhibition of the predominant phosphodiesterases in the brain, PDE4B and PDE4D (Iona et al., 1998), with 3,5-dimethyl-1-(3-nitrophenyl)-1H-pyrazole-4-carboxylic acid ethyl ester (20 nM).
These treatments were administered in combination with different agonists or antagonists of the P2Y1 and P2Y13 receptors (Fig. 5A,B), as well as in cells in which P2Y1 or P2X7 expression was suppressed with shRNAs, or enhanced by expression of P2Y13 or P2X7-GFP (Fig. 5C-F). The axon elongation provoked by suppressing P2X7 was abolished by the adenylate cyclase antagonist SQ-22536 and by the cAMP antagonist cAMPS-Rp. Furthermore, the inhibitory effect of P2X7-GFP expression on axonal elongation was reversed by the administration of forskolin or the PDE4 inhibitor (Fig. 5D,F). Taken together, these results demonstrate that the P2Y1 and P2Y13 receptors exert opposing regulatory effects on adenylate cyclase activity, and that P2X7 negatively regulates adenylate cyclase activity.

Fig. 5. Adenylate cyclase activity is necessary for ADP-P2Y1-dependent axon elongation. (A) Hippocampal neurons treated with the indicated compounds from day 1 to day 3 in vitro and stained for MAP2 and Tau-1. Scale bar: 100 µm. (B) Axon length in neurons treated with vehicle (black bars) or the indicated adenylate cyclase or cAMP regulators (white bars), in combination with agonists or antagonists of P2Y1 or P2Y13. Graphs represent the mean axon length ± s.e.m. from three independent experiments, analyzing 100 neurons for each condition in each experiment; ***P<0.001, **P<0.01; n.s., not significant. (C,E) Hippocampal neurons nucleofected with P2Y1 shRNA and stained at 3 DIV for Tau-1 or α-tubulin (red). Nucleofected neurons were identified by GFP fluorescence. Neurons were treated with the adenylate cyclase activator forskolin (5 µM) or a PDE4 inhibitor (20 nM). Note that both treatments reversed the negative effects of P2Y1 silencing or P2Y13 expression on axon elongation. The graphs in E show the axonal lengths ± s.e.m. from three independent experiments, analyzing 100 GFP-positive neurons for each condition in each experiment; ***P<0.001. (D,F) Neurons nucleofected with P2X7 shRNA or P2X7-GFP expression plasmids and treated from day 1 to day 3 in vitro with the adenylate cyclase inhibitor (SQ-22536) or the adenylate cyclase activator forskolin, respectively. Note that adenylate cyclase activation or increased cAMP levels reversed the negative effect of P2X7-GFP expression on axon elongation. The graphs in F show the axon length ± s.e.m. from three independent experiments, analyzing 100 neurons for each condition in each experiment; ***P<0.001. Scale bars: 100 µm.

These findings raise the question of whether the coordinated activity of these three receptors and their extracellular agonists can together modulate axon elongation through a common pathway involving adenylate cyclase, cAMP and PKA. Thus, we investigated the possibility that the AC5 adenylate cyclase isoform coordinates the activities of the P2Y1, P2Y13 and P2X7 receptors during axon elongation. In hippocampal neurons, AC5 was found in the soma, as well as in a distal gradient in the axon and the actin region of the growth cone, and to a lesser extent in minor neurites, similar to the receptors P2Y1, P2Y13 and P2X7 (Fig. 6B; Fig. 1) (Diaz-Hernandez et al., 2008). NKY80 (10 µM) is a highly selective inhibitor of AC5 (Onda et al., 2001) that impaired ADP-mediated axon elongation in hippocampal neurons treated from 1 to 3 DIV (170.92±14.04 µm with ADP plus NKY80 vs 253.38±11.48 µm in ADP-treated neurons and 201.34±7.62 µm in control neurons; Fig. 6C,E).
Similarly, axon elongation following P2Y13 inhibition with MRS-2211 was abolished by NKY80, although NKY80 had no effect on axon elongation in neurons treated with 2 mM dbcAMP (Fig. 6E). Moreover, axon growth mediated by Gq activation (with rPMT) or Gi inhibition (with pertussis toxin) was impaired by AC5 inhibition (Fig. 6E). Next, neurons were nucleofected with plasmids expressing P2Y1-GFP, or with interference shRNAs for P2X7 or P2Y13, and treated with NKY80 from 1 to 3 DIV (Fig. 6F,H). Although P2Y1-GFP expression promoted axon elongation, this effect was impaired by inhibiting AC5 with NKY80. Similarly, NKY80 attenuated the increases in axon elongation observed in neurons expressing P2X7 or P2Y13 shRNAs.

Finally, neurons were nucleofected with a plasmid expressing a FRET-based biosensor of cAMP levels (ICUE3) (DiPilato and Zhang, 2009). Neurons were treated with ADP (5 µM) or the P2X7 antagonist BBG (100 nM), alone or in the presence of NKY80 (10 µM), which was added 30 minutes before treatment (Fig. 7). Both ADP and BBG increased cAMP levels in the distal region of the axon by 20-30% (Fig. 7D,E) compared with the normalized value before treatment (Fig. 7E). These increases were abolished by pre-treatment with NKY80. As a control, neurons were treated with forskolin in the presence or absence of NKY80; in both cases, cAMP levels were similar to those observed following exposure to ADP or BBG. Moreover, inhibition of PKA, a cAMP-dependent kinase, abolished the increase in axon elongation produced by ADP or MRS-2211 (Fig. 7F). Taken together, these results demonstrate that adenylate cyclase 5 is a common mediator of the axon elongation regulated by the P2Y1, P2Y13 and P2X7 receptors in response to extracellular ADP and ATP.

ADP regulates the PI3K pathway

Initial axon establishment was not impaired by P2Y1 shRNA, P2X7-GFP or P2Y13 expression, or AC5 shRNA nucleofection, suggesting that these purinergic receptors act through a common signaling pathway to specifically regulate axon elongation. Enhanced axon growth following P2X7 inhibition appears to be mediated through the phosphoinositide 3-kinase (PI3K)-Akt-GSK3 pathway (Diaz-Hernandez et al., 2008) and, moreover, P2Y13 activity can regulate GSK3 phosphorylation (Ortega et al., 2008). Thus, we examined whether the effects of ADP on axon elongation were mediated by the PI3K pathway. In 3 DIV hippocampal neurons, Akt and GSK3 phosphorylation was augmented following acute administration of ADP (5 µM; Fig. 8F,G). When PI3K was inhibited with LY-294002 (10 µM) from 1 to 3 DIV, the axon elongation induced by ADP or MRS-2211 was suppressed in both cases (Fig. 8A,C). Hippocampal neurons expressing the P2Y13 receptor or treated with the P2Y1 antagonist MRS-2179 were cultured from day 1 to day 3 in vitro in the presence of a GSK3 inhibitor (AR-A014418; 20 µM). The inhibition of GSK3 reversed the inhibitory effects of both P2Y1 inhibition and P2Y13 expression on axon elongation (Fig. 8B,D,E). Moreover, P2Y1 inhibition diminished the phosphorylation of Akt and GSK3 (Fig. 8F,G). Altogether, these results suggest that adenylate cyclase 5 is regulated by ADP, ATP and the P2Y1, P2Y13 and P2X7 receptors, through G proteins and extracellular Ca2+ entry. This modulates cAMP levels and PKA function, regulating the input of purinergic receptor signaling into the PI3K-Akt-GSK3 pathway (Fig. 8H).
Discussion

Purines and purinergic receptors have been implicated in a variety of physiological and pathological conditions, including neurotransmission, brain development, inflammation, pain, central nervous system injury, neuropsychiatric disorders and neurodegenerative diseases (Burnstock, 2007; Burnstock, 2008). Purinergic receptors have been identified at early stages of brain development, when they regulate stem cell proliferation (Burnstock and Ulrich, 2011). However, their role in the development of neuronal morphology and axon growth remains largely unknown. We previously demonstrated that ATP, acting at the purinergic P2X7 receptor, negatively modulates axon growth, generating an increase in calcium at the distal axon (Diaz-Hernandez et al., 2008), whereas inhibition or suppression of this receptor promotes axon elongation. The present study demonstrates that ADP enhances axon growth in cultured hippocampal neurons through the activation of the metabotropic P2Y1 receptor. Both ADP and P2Y1 have previously been described as regulators of neurosphere proliferation (Mishra et al., 2006). P2Y1 is also necessary for proper migration of intermediate neuronal progenitors to the neocortical subventricular zone (Liu et al., 2008). Our results show that P2Y1 function is necessary for proper axonal elongation, and that this can be downregulated by the action of ADP at the P2Y13 receptor or by the activation of the P2X7 receptor. The overall positive effect on axon elongation promoted by ADP might be the result of the higher expression of P2Y1 than of P2Y13 receptors, as shown in supplementary material Fig. S1. The contrasting effects of P2Y1 and P2Y13 activation on axon elongation are consistent with their opposing roles in the control of pain or insulin secretion (Amisten et al., 2010; Malin and Molliver, 2010).

Both P2Y1 and P2Y13 are expressed in the distal region of the axon, similar to P2X7 (Diaz-Hernandez et al., 2008). Thus, in both normal and pathological conditions, variations in P2Y1, P2Y13 or P2X7 membrane expression, or in extracellular ATP and ADP concentrations, can modulate axon elongation. In this sense, the different ectonucleotidase families that modify the number of phosphates in adenine nucleotides could be important regulators (Langer et al., 2008). For example, expression in cultured neurons of tissue non-specific alkaline phosphatase (TNAP), which reduces extracellular ATP levels, improves axonal growth (Diez-Zaera et al., 2011). Our results suggest that axon elongation is regulated through the coordination of the ionotropic ATP-gated P2X7 receptor and the ADP-activated P2Y receptors. Thus, in the absence of elevated concentrations of extracellular ATP, P2Y1 receptors can potentiate axonal elongation, and this action might be partially controlled by P2Y13 and P2X7 receptors. However, after different kinds of 'acute' CNS injury (e.g. ischemia, hypoxia, mechanical stress, axotomy), extracellular ATP can reach high concentrations, up to the millimolar range, flowing out of cells into the extracellular space exocytotically, by transmembrane transport, or as a result of cell damage (Franke and Illes, 2006). In that case, P2X7 activation by high concentrations of extracellular ATP can negatively modulate axonal elongation. In the case of acute spinal cord injury, P2X7 inhibition substantially improves functional recovery and diminishes cell death in the peritraumatic zone (Wang et al., 2004). Moreover, P2 purinergic receptors are also involved in neurodegenerative diseases.
In the case of Alzheimer's disease, P2X7 receptor expression is upregulated (Parvathenani et al., 2003) and P2Y1 receptor expression shows an altered distribution in the human AD brain (Moore et al., 2000).

The modulation of axon elongation by three distinct purinergic receptors and varying concentrations of their agonists suggests that their regulatory effects are coordinated at a common intracellular checkpoint. cAMP is a second messenger with regulatory effects on axon formation and elongation (Shelly et al., 2010). Adenylate cyclase is implicated in multiple pathways that modulate axonal growth, both positively and negatively, in developmental and pathological conditions. For example, elevation of intracellular cAMP levels by dbcAMP, or inhibition of PDE4, can overcome myelin inhibition both in vitro and in vivo (Lu et al., 2004; Nikulina et al., 2004; Pearse et al., 2004). Our results show that P2Y1 activation and P2X7 inhibition both increase cAMP levels at the distal region of the axon. One adenylate cyclase isoform, AC5, is regulated both by G proteins (Willoughby and Cooper, 2007) and by submicromolar concentrations of Ca2+ (Cooper, 2003; Guillou et al., 1999). Inhibition of AC5 blocks the axon elongation promoted by activation of the P2Y1 receptor or Gq, or by inhibition of Gi proteins, and it impairs the axon elongation produced by ADP or by P2X7 inhibition.

Gq proteins activate phospholipase C (PLC), which in turn can activate PKCs, including PKCζ (van Dijk et al., 1997). Moreover, PKCζ associates with Gq upon G-protein-coupled receptor activation (Garcia-Hoz et al., 2010). This activation of PKCζ can stimulate AC5 activity (Kawabe et al., 1994; Willoughby and Cooper, 2007). Indeed, we demonstrate that inhibition of PKCζ impairs the axon elongation produced by ADP, P2Y13 inhibition or P2X7 inhibition. The reduction in axonal elongation produced by PKCζ inhibition is avoided by adenylate cyclase activation, placing PKCζ upstream of adenylate cyclase 5 and downstream of activation by the P2Y1 receptor. The activation of AC5 by PKCζ is, in turn, regulated in the opposite direction by P2Y13, Gi and P2X7. Consistent with a regulatory link between P2X7 and AC5, retinoic-acid-induced differentiation has been shown to decrease P2X7 receptor levels in N2a cells (Wu et al., 2009) and to increase AC5 mRNA levels during P19 cell differentiation (Lipskaia et al., 1997).

AC5 mRNA is expressed in the striatum and hippocampus of the adult brain (Kheirbek et al., 2009), and it increases in the hippocampus during postnatal development up to day 14 (Matsuoka et al., 1997). Studies in AC5-knockout mice demonstrate that AC5 is involved in a signaling pathway in corticostriatal plasticity and striatum-dependent learning. Moreover, loss of AC5 compromises the ability of both contextual and discrete cues to modulate instrumental behavior (Kheirbek et al., 2010; Kheirbek et al., 2009), and AC5-knockout mice show striking anxiolytic and antidepressant phenotypes in standard behavioral assays (Krishnan et al., 2008). Finally, AC5-knockout mice have markedly attenuated pain-like responses in neuropathic pain models (Kim et al., 2007), in accordance with the proposed roles of P2Y1 and P2Y13 in pain (Malin and Molliver, 2010). Our results demonstrate that AC5 protein is found in the soma and shows an increasing distal gradient along the axon and axonal growth cone.
This distribution of AC5 in the axon mirrors the expression of the three purinergic receptors of interest in hippocampal neurons, and the previously reported localization of adenylate cyclase (Mizuhashi et al., 2001) and PKA (Sato et al., 2002). Signaling through each of the three purinergic receptors can enhance or diminish the velocity of axon growth, but in no case was axon formation impaired. The coordination of these signaling pathways by adenylate cyclase 5 could be a key means of regulating a main pathway involved in axon growth and neuronal connectivity.

Taken together with our previous findings, our results indicate that activation of P2Y1 potentiates the PI3K-Akt-GSK3 pathway to promote axon elongation, whereas ATP exerts an opposing effect, downregulating signaling through this pathway and inhibiting axon growth. ADP activation of P2Y1 increases Akt and GSK3 phosphorylation while promoting axon elongation, an effect previously demonstrated following P2X7 inhibition (Diaz-Hernandez et al., 2008). This enhanced elongation can be impaired by inhibiting PI3K, whereas the attenuation of axon growth by P2Y1 inhibition can be counteracted by inhibiting GSK3, and such inhibition has previously been shown to induce axonal elongation (Garrido et al., 2007). Further studies will be necessary to fully elucidate the complex signaling network that unites adenylate cyclase 5, cAMP and PKA, and the PI3K-Akt-GSK3 pathway. The complex coordination of purines and purinergic receptors described in our work does not exclude the possibility that other parallel signaling pathways regulated by these receptors are also involved in the regulation of neuronal development and function. However, on the basis of the present findings, we can propose a model whereby two purine nucleotides, ATP and ADP, control the activity of three purinergic receptors, P2Y1, P2Y13 and P2X7, in response to changes in extracellular purine concentrations in both normal and pathological conditions. These interactions positively or negatively influence adenylate cyclase 5 activity, thereby modulating the capacity of PKA and neurotrophic factors to activate the PI3K-Akt-GSK3 pathway. Accordingly, the development of P2Y1-specific agonists, combined with specific antagonists of P2X7 and P2Y13, might provide an efficient means of treating brain diseases associated with the reduction of synaptic contacts, neuronal death or axonal degeneration, by promoting the arrival of axons and the generation of new synapses.

Cell culture

Hippocampal neuronal cultures were prepared as described previously (Banker and Goslin, 1988). Briefly, the hippocampus was removed from E17 mouse embryos and, after dissection and washing three times in Ca2+- and Mg2+-free Hank's balanced salt solution (HBSS), the tissue was digested in the same solution containing 0.25% trypsin for 15 minutes at 37°C. The hippocampi were then washed again three times in Ca2+- and Mg2+-free HBSS and dissociated with a fire-polished Pasteur pipette. The cells were counted, resuspended in plating medium (MEM, 10% horse serum, 0.6% glucose) and plated at a density of 5,000 cells/cm2 on polylysine-coated coverslips (1 mg/ml). Neurons were incubated at 37°C for 2 hours before being switched to neuronal culture medium (Neurobasal, B-27, Glutamax-I). To analyze the effect of P2Y receptor agonists and antagonists, the compounds were added to the cultured neurons 1 day after plating at the concentrations indicated, and maintained for a further 48 hours.
For biochemical experiments, hippocampal neurons were plated at a density of 200,000 cells/cm2 on 60 mm plates coated with polylysine (0.5 mg/ml). Before plating, the different plasmids were nucleofected into the neurons using the Amaxa nucleofection kit according to the manufacturer's instructions for hippocampal neurons. Only scattered glial cells appeared after 3 DIV, and our neuronal cultures were 99% pure. HEK-293T cells were maintained in DMEM (Gibco) supplemented with 10% (v/v) fetal calf serum (FCS). The cells were resuspended 1 day before transfection and plated at a density of 10^5 cells/cm2, and they were maintained in medium containing 0.5% FCS. HEK-293T cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions.

Plasmids

Full-length human P2Y1 cDNA (cDNA clone number MGC: BC074784; Geneservice Ltd) was subcloned into the EcoRI and BamHI sites of the mammalian pEGFP-N1 expression vector after PCR amplification with the primers 5'-CTAGGAATTCATGACCGAGGTGCTGTGGCC-3' and 3'-CTAGGGATCCGGCAGGCTTGTATCTCCATTCT-5'. The full-length human P2Y13 cDNA was purchased from Open Biosystems (cDNA clone number: BC041116). P2Y1 and P2Y13 receptor knockdown was achieved using RNA interference (RNAi), applying a vector-based shRNA approach. The shRNA target sequences 5'-GCTGTGTCTTACATCCCTTTC-3' or 5'-GCATCTCCGTGTACATGTTCA-3' were selected for P2Y1, and 5'-CCTTTCCGACTCACACCTT-3' or 5'-CAGCTGTTTATTGCTAAA-3' for the P2Y13 receptor, in accordance with a previously reported rational design protocol (Reynolds et al., 2004). The P2X7 expression plasmids and shRNA used here have been described previously (Diaz-Hernandez et al., 2008). As a control, we used a firefly luciferase-targeted oligonucleotide, 5'-CTGACGCGGAATACTTCGA-3'. Synthetic forward and reverse 64-nucleotide oligonucleotides (Sigma Genosys) were designed, annealed and inserted into the BglII-HindIII sites of the pSUPER.neo.GFP vector (OligoEngine, Seattle, WA) following the manufacturer's instructions. Nucleofected neurons were identified by the expression of green fluorescent protein (GFP) from this vector. The adenylate cyclase 5 interference shRNAs (79 and 84) and the control scrambled shRNA were purchased from Origene (TG506651).

FRET imaging and analysis

Hippocampal neurons were nucleofected with a plasmid expressing the FRET biosensor ICUE3 (DiPilato and Zhang, 2009). Neurons were cultured for 2 DIV and examined with a C9100-02 CCD camera (Hamamatsu) on an Axiovert 200 Zeiss microscope, using a 75 W/2 xenon XBO lamp and a 40×/1.3 NA objective (Zeiss). The excitation wavelength used was 422-432 nm, and emission wavelengths were separated with a double dichroic filter (440-500 nm and 510-610 nm), with 460-500 nm and 528.5-555.5 nm emission filters for CFP and YFP fluorescence, respectively. Images were collected and analyzed with Metamorph 7.1 r2 software (Universal Imaging), and live images were acquired for 120-140 ms at 15-second intervals. For global manipulation of cAMP signaling, pharmacological agents were applied to the bath after 150 seconds of baseline recording. The intensity of the CFP and YFP fluorescence was measured at the distal region of the axon of hippocampal neurons using ImageJ software.
For the ratiometric FRET analysis, the background was subtracted from the CFP and YFP signals of the defined distal region of the axon (background intensity was calculated from a cell-free region using ImageJ software), the signals were then normalized to the control value (averaged over 150 seconds of baseline recording), and the FRET value was calculated as the YFP:CFP ratio. The neuronal medium was switched to Neurobasal medium without phenol red 30 minutes prior to analysis. The concentrations of pharmacological agents applied to the bath were as follows: ADP, 5 µM; BBG, 100 nM; and forskolin, 5 µM. These treatments were applied either alone or following a 30-minute preincubation with NKY80 (10 µM) prior to image acquisition, and the compounds tested remained present throughout the experiment.

Immunocytochemistry

Neurons were cultured for 3 DIV followed by fixation in 4% paraformaldehyde for 20 minutes. Non-specific binding was blocked with 0.22% gelatin and 0.1% Triton X-100 in 0.1 M phosphate buffer. Cells were then incubated with primary antibodies for 1 hour at room temperature, washed and incubated with Alexa-Fluor-conjugated secondary antibodies (1:1000) and Alexa-Fluor-594-conjugated phalloidin (1:100). The coverslips were mounted using Fluoromount G (Southern Biotech), and images were acquired on an LSM510 confocal system coupled to an Axiovert 200M (Zeiss) microscope. Axon length and ramifications were analyzed with the NeuronJ program, and fluorescence intensity was evaluated using the RGB color profiler tool of ImageJ. Images were processed and presented using Adobe Photoshop and Illustrator CS3.

Statistics

All experiments were repeated at least three times, and the results are presented as means ± s.e.m. or as box-plots showing the distribution of axonal lengths of all neurons from at least three experiments. Axon length was quantified in at least 100 neurons for each condition and experiment, and all axons were identified as Tau-1-positive processes. Statistical differences were determined by ANOVA using SigmaPlot software.
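A minimal sketch of the ratiometric FRET computation described above (background subtraction from a cell-free region, baseline normalization over ~150 s, YFP:CFP ratio). The function name and inputs are hypothetical; this is not the authors' analysis script.

```python
import numpy as np

def normalized_fret(yfp, cfp, yfp_bg, cfp_bg, n_baseline):
    """Background-subtracted YFP:CFP ratio, normalized to the mean of the
    first n_baseline frames (~150 s of pre-treatment recording).
    Values of ~1.2-1.3 after treatment would correspond to the 20-30%
    cAMP increases reported above for ADP and BBG."""
    ratio = (yfp - yfp_bg) / (cfp - cfp_bg)   # background from a cell-free region
    return ratio / ratio[:n_baseline].mean()
```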
Mechanism of Dephosphorylation of Glucosyl-3-phosphoglycerate by a Histidine Phosphatase

Background: The molecular basis for the dephosphorylation step in the biosynthesis of methyl glucose lipopolysaccharides (MGLPs) of Mycobacterium tuberculosis is unknown.
Results: Structures of unliganded, vanadate-bound, and phosphate-bound glucosyl-3-phosphoglycerate phosphatase (GpgP) reveal pivotal conformational changes in the enzyme during dephosphorylation.
Conclusion: Dimerization and maneuvers of loop 2 play essential roles during dephosphorylation.
Significance: We present the first structures of a histidine phosphatase-type GpgP and explain the mechanism of catalysis.

Mycobacterium tuberculosis (Mtb) synthesizes polymethylated polysaccharides that form complexes with long chain fatty acids. These complexes, referred to as methylglucose lipopolysaccharides (MGLPs), regulate fatty acid biosynthesis in vivo, including the biosynthesis of the mycolic acids that are essential for building the cell wall. Glucosyl-3-phosphoglycerate phosphatase (GpgP, EC 5.4.2.1), encoded by the Rv2419c gene, catalyzes the second step of the pathway for the biosynthesis of MGLPs. The molecular basis for this dephosphorylation is currently not understood. Here, we describe the crystal structures of apo-, vanadate-bound, and phosphate-bound MtbGpgP, depicting unliganded, reaction-intermediate-mimic, and product-bound views of MtbGpgP, respectively. The enzyme consists of a single domain made up of a central β-sheet flanked by α-helices on either side. The active site is located in a positively charged cleft situated above the central β-sheet. Unambiguous electron density for vanadate covalently bound to His11, mimicking the phosphohistidine intermediate, was observed. The role of residues interacting with the ligands in catalysis was probed by site-directed mutagenesis. Arg10, His11, Asn17, Gln23, Arg60, Glu84, His159, and Leu209 are important for enzymatic activity. Comparison of the structures of MtbGpgP revealed conformational changes in a key loop region connecting β1 with α1. This loop regulates access to the active site. MtbGpgP functions as a dimer. The L209E mutation resulted in monomeric GpgP, rendering the enzyme incapable of dephosphorylation. The structures of GpgP reported here are the first crystal structures for histidine phosphatase-type GpgPs. They shed light on a key step in the biosynthesis of MGLPs that could be targeted for the development of anti-tuberculosis therapeutics.

The resilience and pathogenicity of Mycobacterium tuberculosis (Mtb) has been partly ascribed to its unique cell envelope, which is made up of a remarkable mixture of polysaccharides and lipidic moieties, some of which are found exclusively in mycobacteria (1-4). Among the various components that make up the cell wall, mycolic acids impart structural integrity to the cell wall. The synthesis of mycolic acids is regulated by methylglucose lipopolysaccharides (MGLPs) via their association with, and modulation of the activity of, the fatty acid synthase I complex (5). In this context, MGLPs have been studied vigorously and have been shown to be physiologically important. The significance of MGLPs in Mtb is further underscored by the fact that many enzymes involved in the biosynthesis of MGLPs are essential for the survival of the bacterium (6). MGLPs constitute complexes of 6-O-methyl glucose lipopolysaccharides and long chain fatty acid molecules mixed in 1:1 stoichiometry. The biosynthesis of MGLPs requires the concerted action of many enzymes (Fig. 1A) (7-10).
These include enzymes necessary for the transfer of glucosyl, methyl, and acetyl groups, as well as accessory enzymes like glucoside hydrolases and phosphatases. Although some of these enzymatic activities have been characterized, the identity of a number of enzymes catalyzing discrete steps in the biosynthesis of MGLPs is currently unknown. The first enzymatic activity proposed for the pathway leading to the biosynthesis of MGLPs is the glucosyl-3-phosphoglycerate synthase activity that fuses the glucosyl moiety of UDP-glucose with D-3-phosphoglyceric acid. The resultant glucosyl-3-phosphoglycerate (GPG) is subjected to dephosphorylation by a recently identified GPG phosphatase (GpgP) to produce glucosyl glycerate (GG) (Fig. 1A) (11). In the next step, two molecules of GG are linked together by di-glucosyl glycerate synthase to produce di-glucosyl glycerate. A glucosyl transferase extends the di-glucosyl glycerate moiety further via the formation of α-(1→4) linkages. The hexose units of the polymer are modified with position-specific methyl and acetyl groups (12, 13). However, the identity of many of the enzymes catalyzing these methyl and acetyl transfers is currently unknown.

Recently, Rv2419c was shown to dephosphorylate GPG with high specificity and was hence annotated as the GpgP catalyzing the second step in the pathway for the biosynthesis of MGLPs (Fig. 1B) (14). Unlike haloacid dehalogenase-like phosphatases, which require a metal ion for catalysis (11, 15-17), MtbGpgP carries out metal ion-independent dephosphorylation. The enzyme harbors a characteristic RHG motif and therefore belongs to the histidine phosphatase superfamily (18). Catalysis by members of this family proceeds via the formation of a phosphohistidine intermediate (18). The phosphate group from the substrate is transferred to the catalytic histidine of the enzyme. A glutamate residue is known to play the role of proton donor during this phosphorylation; the same residue accepts a proton during dephosphorylation of the histidine. Although MtbGpgP exhibits the RHG motif, its primary sequence shares only 31% identity with its closest known homologue, PhoE, a promiscuous phosphatase from Bacillus stearothermophilus. Therefore, we sought to obtain a structural view of the protein to find out how MtbGpgP differs structurally from its homologues. The structure was also likely to explain the molecular basis for the conversion of GPG to GG by MtbGpgP, which is currently unknown.

Here, we describe the crystal structures of apo-, vanadate-bound, and phosphate-bound MtbGpgP, representing the unliganded, reaction-intermediate-mimic, and product-bound views of the enzyme. Comparison of the structures reveals pivotal conformational changes in a loop region located in proximity to the active site, providing insights into the maneuvers of structural elements during the course of catalysis. Structure-guided site-directed mutagenesis and the results of activity assays of mutants have helped identify amino acids essential for catalysis. The structures, together with the mutagenesis and biochemical data, provide a framework for understanding the dephosphorylation of GPG by MtbGpgP.
EXPERIMENTAL PROCEDURES

Cloning, Expression, and Purification

The open reading frame corresponding to Rv2419c (MtbGpgP) was amplified from the genomic DNA of the H37Rv strain of Mycobacterium tuberculosis by PCR and inserted between the BamHI and XhoI restriction sites of the pET28a vector (Novagen). This construct expressed the protein with an N-terminal His6 tag. Point mutations (R10A, N17A, Q23A, R60A, E84Q, H159A, and L209E) were introduced into this construct using the QuikChange site-directed mutagenesis kit (Invitrogen) following the manufacturer's instructions. All constructs were verified by sequencing the entire gene prior to expression. MtbGpgP was overexpressed in Escherichia coli BL21 (DE3) (TransGen Biotech). Cells were grown aerobically at 37°C in Luria-Bertani medium supplemented with 100 µg/ml kanamycin. When the cell density (A600) reached 0.6, the culture was first cooled for 2 h at 4°C and then induced at 16°C for 18 h with 0.5 mM isopropyl β-D-thiogalactoside. The cells were harvested by centrifugation at 8,000 × g for 15 min. The cell pellet was resuspended in McAc0 buffer (25 mM Tris-HCl, 500 mM NaCl, pH 8.0) and lysed by sonication. Unbroken cells and debris were removed by spinning the lysate at 12,000 × g for 40 min. The supernatant containing the soluble target protein was then loaded onto a nickel-nitrilotriacetic acid column (GE Healthcare) previously equilibrated with McAc0 buffer. After thorough washing with buffer, protein bound to the column was eluted with McAc0 buffer supplemented with 500 mM imidazole (McAc500). Imidazole was removed by size exclusion chromatography on a Superdex G200 column (GE Healthcare) equilibrated with 25 mM HEPES, 100 mM NaCl, pH 7.5. The protein was purified further by an anion exchange chromatography step using a Resource Q column (GE Healthcare). Protein bound to the column was eluted using a linear gradient of up to 1 M NaCl. Fractions containing protein were analyzed by SDS-PAGE. Fractions exhibiting a single band on SDS-PAGE corresponding to the molecular weight of the protein were pooled, concentrated to 15-20 mg/ml, and stored at −20°C until further use. Mutants of MtbGpgP were expressed and purified using the same protocol as for the wild type protein. The purity of the proteins was estimated to be >95% as judged by SDS-PAGE analysis.

Crystallization and Data Collection

Crystals of MtbGpgP were grown at 293 K using the sitting drop vapor diffusion technique. 1 µl of protein solution (20 mg/ml in 25 mM HEPES, 150 mM NaCl, pH 7.5) was mixed with 1 µl of reservoir solution and equilibrated over 100 µl of reservoir solution. Complexes of MtbGpgP were prepared by mixing the enzyme with ammonium metavanadate and p-nitrophenyl phosphate (pNPP), respectively. Crystals of apo-MtbGpgP, MtbGpgP-VO3, and MtbGpgP-PO4 were obtained under the following conditions, respectively: 0.1 M HEPES, pH 7.5, 10% 2-propanol, 28% (w/v) polyethylene glycol 4000; 1.26 M sodium phosphate monobasic monohydrate, 0.14 M potassium phosphate dibasic, pH 5.6; and 0.1 M HEPES, pH 7.5, 10% 2-propanol, 20% (w/v) polyethylene glycol 4000. Crystals were cryoprotected by adding 20% glycerol to the crystallization solution before being flash frozen in liquid nitrogen. X-ray diffraction data for MtbGpgP were collected at beamline BL17U of the Shanghai Synchrotron Radiation Facility. Data sets for the MtbGpgP-VO3 and MtbGpgP-PO4 complexes were collected on beamline 5A of the Photon Factory (Japan).
All of the data were processed with the HKL2000 suite of programs (19) (see Table 1).

Phasing, Model Building, and Refinement

The crystal structure of MtbGpgP was solved by molecular replacement with the BALBES software suite (20) using the structure of PhoE (Protein Data Bank code 1H2E) as the search template. Automated model rebuilding with ARP/wARP was then performed on the structure solution obtained with BALBES (21). The atomic model was refined to an Rfree value of 0.247 by iterative cycles of refinement involving manual model adjustment with Coot (22) and Phenix.refine (23). For the MtbGpgP-VO3 and MtbGpgP-PO4 complexes, the native structure was used as the search template for molecular replacement with Phaser (24). Fitting of the ligands into the corresponding difference electron density maps, calculated from the map coefficients obtained after Phenix.refine runs, was carried out with Phenix.ligandfit (25). Both complex structures were finalized by several rounds of manual building in Coot and refinement using Phenix.refine. Electron density was missing for amino acids 197-198 of chain A and for amino acids 17-19 and 197-199 of chain B. All structures were judged to have good stereochemistry according to the Ramachandran plot calculated by MolProbity (26). A summary of the data collection, phasing, and structure refinement statistics is listed in Table 1.

Enzyme Assays

The phosphatase activity of wild type and mutant MtbGpgP was estimated using pNPP as substrate. The reaction mixture consisted of 20 mM Bis-Tris HCl, 2.5 mM MgCl2, 3 mM pNPP, pH 7.0, and pure MtbGpgP or its mutant (1 mg/ml). Incubation was carried out at 37°C for 15 min. The amount of p-nitrophenol released was measured by reading the absorbance at 405 nm as described previously (27).

Multiangle Light Scattering Analysis

Multiangle light scattering (MALS) analyses were performed at room temperature, coupled with size exclusion chromatography, using an 18-angle DAWN HELEOS II instrument equipped with an Optilab rEX refractive index detector (28). All samples were diluted to 3 mg/ml and injected into a Superdex 200 10/300 GL column (GE Healthcare) equilibrated with 25 mM HEPES, 150 mM NaCl, pH 7.5. Calibration of the light scattering detector was performed with albumin monomer standards before conducting the assays. The data were analyzed using the ASTRA software (28).

RESULTS

Overall Structure of MtbGpgP

Recombinant MtbGpgP was expressed in E. coli with an N-terminal His6 tag. The enzyme was purified to homogeneity using a series of chromatography steps: nickel affinity, ion exchange, and gel filtration. The enzyme was active when tested for phosphatase activity, and this preparation was used for structural studies. We solved the crystal structures of the apo-form of GpgP (apo-MtbGpgP), of the complex of GpgP with a transition state mimic, vanadate (MtbGpgP-VO3), and of the complex of GpgP with one of the reaction products, orthophosphate (MtbGpgP-PO4). These structures were refined to 1.95, 2.30, and 1.77 Å resolution, respectively. The structure of the apo-enzyme was solved by molecular replacement using the structure of a phosphatase from B. stearothermophilus (29) (PhoE; Protein Data Bank code 1H2E) as the search template. PhoE shares 31% sequence identity with MtbGpgP. The subsequent structures of the binary complexes were solved by molecular replacement using the structure of the apo-enzyme as the search template. The final model of each of the three structures consists of residues 3-215 and exhibits good stereochemistry.
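As a worked illustration of the pNPP assay readout described above under "Enzyme Assays", the sketch below converts an A405 reading into a specific activity via the Beer-Lambert law. The extinction coefficient, reaction volume and example absorbance are assumptions for illustration only; the paper does not report them.

```python
def pnpp_specific_activity(a405, eps_m_cm=18000.0, path_cm=1.0,
                           volume_ml=1.0, minutes=15.0, protein_mg=1.0):
    """Specific activity (µmol p-nitrophenol min^-1 mg^-1) from A405.
    eps_m_cm is an assumed molar extinction coefficient for p-nitrophenolate
    at 405 nm (pH-dependent; ~18,000 M^-1 cm^-1 near neutral pH)."""
    conc_m = a405 / (eps_m_cm * path_cm)          # Beer-Lambert: c = A / (eps * l)
    umol = conc_m * (volume_ml / 1000.0) * 1e6    # µmol released in the reaction
    return umol / (minutes * protein_mg)

# e.g. A405 = 0.45 in a hypothetical 1 ml reaction after 15 min with 1 mg enzyme:
print(f"{pnpp_specific_activity(0.45):.4f} µmol/min/mg")
```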
Intriguingly, electron density for amino acids 197-200 was missing for all the chains except one. These amino acids are part of a loop connecting strand β4 with β5. Notably, this region protrudes out of the protein but is located far away from the bound ligands. There are two protomers in the asymmetric unit, suggesting the possibility of a dimer being the minimal functional unit of MtbGpgP. The overall structures of apo-MtbGpgP, MtbGpgP-VO3, and MtbGpgP-PO4 are similar, with a root mean square deviation (RMSD) of less than 0.75 Å between the Cα atoms. Data collection and refinement statistics for the structures are listed in Table 1.

MtbGpgP consists of a single domain with a central twisted β-sheet that is flanked by α-helices on either side (Fig. 2A). The β-sheet is made up of five β-strands. Strands β1, β2, β3, and β5 run parallel, whereas strand β4 runs anti-parallel. Strand β5 is positioned at the dimer interface. Each protomer of MtbGpgP contains seven α-helices. The arrangement of the structural elements is reminiscent of the classical α/β/α sandwich architecture of the canonical cofactor-dependent phosphoglycerate mutase (dPGM) fold (29, 30). Although the last five amino acids are missing in all the structures of MtbGpgP, the C-terminal tail is unlikely to extend into the active site as observed for PhoE and other dPGMs (29, 31, 32).

A search for structural homologues using the Dali server (33) revealed that the overall structure of MtbGpgP is similar to the PhoE phosphatase (Table 2) (Protein Data Bank code 1EBB; Z score 24.5). The Cα atoms of the two structures could be superimposed with an RMSD of 2.1 Å for 197 matching residues (Fig. 2B). Other significant structural matches included phosphoserine phosphatase 1 (34) (PsP1; Protein Data Bank code 4IJ5) and phosphoglycerate mutase (35) (PGM; Protein Data Bank code 3EZN). Although PsP1 shares only 29% sequence identity with MtbGpgP, the overall structures of the two enzymes are similar (Fig. 2B). In addition, the mode of dimerization observed in the crystal structures of the two enzymes is also similar. However, unlike for MtbGpgP, the exact substrate of this phosphatase is currently not known (14). In contrast to PsP1, the substrate specificities of PGMs are known: they catalyze the conversion of 3-phosphoglycerate to 2-phosphoglycerate with high specificity (31). This sets them apart from MtbGpgP, which shows greater dephosphorylation activity against GPG than PGM activity. Despite the differences in the reactions catalyzed, these two enzymes share a common fold, which is evident from a high Z score of 22.4 and a low RMSD of 2.2 Å for 195 matching Cα atoms when the two structures are superimposed (Fig. 2B).

In all the structures of MtbGpgP, the two protomers present in the asymmetric unit form a dimer that involves the participation of the terminal β5 strand of each protomer (Fig. 2C). The strands lie adjacent to each other, giving the appearance of a contiguous 10-stranded β-sheet traversing the dimer interface. The two protomers interact extensively via homotypic interactions. Most of these interactions involve amino acids from strand β5 of each protomer (Fig. 2D). Although the interactions are primarily hydrophobic, Arg110, Glu111, Arg206, and Arg208, located at the dimer interface, contribute ionic interactions. Dimerization results in the burial of a total of 1,600 Å2 of solvent-accessible surface area per subunit.
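The superposition RMSDs quoted above (e.g. 2.1 Å over 197 matching Cα atoms against PhoE) are the standard output of an optimal rigid-body alignment. A minimal sketch of that computation via the Kabsch algorithm is shown below; the matched coordinate arrays are assumed inputs (e.g. extracted from the PDB entries after residue matching by Dali).

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Cα RMSD after optimal superposition (Kabsch algorithm).
    P, Q: (N, 3) arrays of matched Cα coordinates."""
    P = P - P.mean(axis=0)                  # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)       # covariance SVD
    d = np.sign(np.linalg.det(V) * np.linalg.det(Wt))
    R = V @ np.diag([1.0, 1.0, d]) @ Wt     # proper rotation (no reflection)
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))
```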
MALS analysis of MtbGpgP suggested that the protein exists as a dimer in solution. Thus, MtbGpgP forms a homotypic dimer with the active sites of the monomers located in diagonally opposite clefts (Fig. 2C).

Structural View of the Phosphohistidine Transition State Mimic

Vanadate compounds have been used previously to study the catalytic mechanism of phosphatases (29). Therefore, to gain insights into the mechanism of the MtbGpgP-catalyzed reaction, we cocrystallized MtbGpgP with ammonium metavanadate. Vanadate was expected to form a covalent bond with the catalytic histidine, mimicking the characteristic phosphohistidine reaction intermediate of histidine phosphatases. As observed previously for other phosphatases (29), the structure of the binary complex of MtbGpgP with vanadate shows vanadate covalently linked to His11 (Fig. 3A). The vanadate is entrenched in a cavity above the central β-sheet (Fig. 3B). Notably, the cavity is shielded by two long loops. Loop 2, connecting β1 with α1, towers over the vanadate, whereas loop 5, connecting β3 with α4, is located adjacent to loop 2 (Fig. 3B). Together, these two loops partially cover the active site and demarcate a large boundary of the active site pocket. One molecule of vanadate could be modeled into the electron density. Interestingly, the density merged with that of the aromatic ring of His11 (Fig. 3A). The vanadate bound to His11 mimics the phosphohistidine reaction intermediate formed during catalysis. Similar covalent tethering of vanadate by a histidine has been observed before for PhoE (29). Other residues interacting with the vanadate moiety include Arg10, Asn17, Gln23, Arg60, Glu84, and His159 (Fig. 3C and Table 3). Notably, the side chains of Gln23 and Asn17 move inwards to make contact with the vanadate. As a result, the position of loop 2 in the transition state differs from that assumed by the enzyme in the absence of substrate. Mutating either of these residues to alanine reduced the enzyme activity dramatically, indicating an important role for this maneuver of loop 2 during catalysis (Fig. 3D). Further, mutating Arg10, Thr14, Arg60, or His159 to alanine almost completely abolished the enzymatic activity, revealing the essentiality of these amino acids for dephosphorylation of the substrate (Fig. 3D). The structure of vanadate-bound MtbGpgP thus provides crucial mechanistic insights and helps identify key residues for the dephosphorylation of GPG.

Structure of Inorganic Phosphate-bound MtbGpgP

MtbGpgP shows low activity against pNPP (14). To gain further insights into the mechanism of catalysis, we cocrystallized pNPP with MtbGpgP. The substrate appears to have been dephosphorylated by MtbGpgP during the course of crystallization: electron density was observed only for orthophosphate noncovalently bound to MtbGpgP, depicting the post-catalytic state of the enzyme (Fig. 4A). The position of the phosphate overlaps with the position of the vanadate. However, the aromatic ring of His11 has stepped back by 1.5 Å after the dephosphorylation and is observed forming a hydrogen bond with an oxygen atom of the phosphate (Fig. 4B). Residues interacting with the phosphate are listed in Table 4. Thus, the structure of the binary complex of MtbGpgP with phosphate validates the inferences about the location of the active site and the identity of residues involved in catalysis. In addition, it reveals that loop 2 undergoes a pivotal conformational change, possibly to assist product release or substrate binding.
A Dimer Is the Minimal Functional Unit of MtbGpgP

Previously, MtbGpgP was shown to exist as a dimer in solution (14). Our crystal structures of MtbGpgP provide information about the location and nature of the dimer interface. Based on these structures, amino acids of the dimer interface were selected and mutated to break the dimer interface. Estimation of the activity of the resulting monomeric MtbGpgP was expected to shed light on whether dimerization of MtbGpgP is essential for dephosphorylation of its substrate. Among the point mutations tested, the L209E mutation disrupted dimerization of the enzyme (Fig. 4C). MALS analysis of the mutant and the wild type enzyme under identical conditions revealed that the L209E mutant of MtbGpgP exists as a monomer in solution (Fig. 4D). Leu209 is located in the middle of strand β5, which mediates dimerization. However, this residue is located far away from the active site, and hence the L209E mutation is unlikely to have any direct effect on the integrity of the active site. Estimation of the enzymatic activity indicated that monomeric MtbGpgP had lost its ability to perform dephosphorylation (Fig. 3D). Taken together, the results of our structural and mutagenesis studies clearly show that dimerization of MtbGpgP is essential for its dephosphorylation activity. A dimer constitutes the minimal functional unit of MtbGpgP.

DISCUSSION

Monomerization of MtbGpgP abolished enzymatic activity. This is in stark contrast to the closest structural homologue of MtbGpgP, PhoE, which functions as a monomer (29, 30). Although intriguing, it is not completely surprising for a phosphatase to be catalytically competent only in its dimeric state. For example, human prostatic acid phosphatase (EC 3.1.3.2; Protein Data Bank code 1CVI) functions as a dimer. The structures of MtbGpgP described here reveal that the dimer interface is located far away from the bound ligands, and therefore it may not contribute residues directly to catalysis. To find out why monomeric MtbGpgP cannot catalyze dephosphorylation, we examined the region around the dimer interface. A close inspection revealed intermolecular ionic interactions between amino acids from β5 of one monomer and loop 5 of the other monomer. Incidentally, strand β5 is part of the dimer interface, whereas loop 5 is a long loop that partially covers the active site. Two intermolecular interactions, the salt bridge between Glu111 and Arg206 and the stacking of the guanidinium groups of Arg110 and Arg208 against each other, probably play a role in localizing loop 5 such that it permits nonintrusive entry of the substrate into the active site (Fig. 5A). In addition, Arg208 from one monomer might assist in docking the substrate into the active site of the other monomer (Fig. 5A). Although these observations help partially explain the need for dimerization for enzymatic activity, a structural view of MtbGpgP in complex with GPG is likely to further clarify this requirement.

GpgPs are grouped under the haloacid dehalogenase-like hydrolase superfamily because they harbor a conserved characteristic DDDD sequence (36, 37). However, GpgP from Mtb is an exception to this grouping because it carries the RHG motif, a hallmark of histidine phosphatases (14). Because PGMs also exhibit RHG motifs, Rv2419c (MtbGpgP) was erroneously annotated as a PGM earlier. Interestingly, Rv2419c does exhibit low PGM activity. However, its phosphatase activity against GPG is comparatively much higher.
In this context, MtbGpgP shows some promiscuity (14). Low dephosphorylation activities of MtbGpgP have been reported against mannosyl-3-phosphoglycerate, mannosylglucosyl-3-phosphoglycerate, and pNPP (14). The specific activities for these substrates are at least 10-fold lower than that for GPG. Examination of the active sites in the MtbGpgP structures reveals that the pocket can be extended to accommodate larger substrates. In particular, displacement of loop 2 could make room for larger substrates like mannosyl-3-phosphoglycerate and mannosylglucosyl-3-phosphoglycerate to dock into the active site. In contrast to MtbGpgP, PhoE is a highly promiscuous phosphatase (29, 30). Intriguingly, the apo- and ligand-bound crystal structures of PhoE reveal no movement of the loop region equivalent to loop 2 of MtbGpgP. Dynamics simulation analysis, however, did suggest the presence of flexible regions around the active site that could explain the substrate promiscuity of PhoE (29, 30).

Key catalytic residues of MtbGpgP, like His11 and Glu84, involved in proton transfers during dephosphorylation, are strictly conserved in PhoE. Other essential residues of MtbGpgP, like Arg10, Arg60, and His159, are also strictly conserved in PhoE. Analysis of conservation based on an alignment of the primary sequences of members of the PGM family, coupled with the structural view of PhoE, helped identify Gln22 as a signpost for phosphatase activity (29, 30). In agreement with this analysis, GpgP has Gln23 located at a structurally identical position. However, differences are observed in the composition of the amino acids around Gln23 (Fig. 5B). This is not surprising because Gln23 is part of loop 2 (Arg10-Ser31), which plays a role in the recognition of substrates, and the two enzymes have different substrate specificities. In particular, Asp15, Gly19, and Ser20 of GpgP are replaced by lysine, glutamate, and arginine in PhoE, respectively, imparting different substrate specificities. ConSurf analysis of MtbGpgP (38) for structural conservation revealed that the region encompassing the active site is highly conserved, highlighting an evolutionarily conserved mechanism of catalysis (Fig. 5B). In contrast, the region in the vicinity of the active site shows less conservation, indicating a role for this region in imparting substrate specificity.

The crystal structures of MtbGpgP described here unveil subtle structural maneuvers of the enzyme during catalysis. In particular, the movement of loop 2 is conspicuous in all three structures. In the apo-enzyme, the loop partially covers the active site, conceivably to occlude nonsubstrate ligands. Once there is recognition, possibly by the side chains of Asn17 and Gln23, the loop permits entry of the substrate into the active site. Amino acids like Arg10, Met22, Glu84, Trp90, His95, Trp109, Arg123, and Asn186 lining the active site probably assist in orienting and positioning the substrate optimally for catalysis. During catalysis, loop 2 covers the active site. This is clearly observed in the structure of the enzyme covalently bound to vanadate, mimicking the transition state intermediate. Here, part of the loop assumes a one-turn helical conformation, resulting in the side chain of Asn17 moving inwards by 8.0 Å and rotating by 110° to make contact with the ligand. Once catalysis is over, loop 2 moves away from the active site, permitting product release.
This movement of loop 2 can be visualized by comparing the structures of the vanadate-bound and PO4-bound MtbGpgP, depicting the transition-state and product-release states of the enzyme during catalysis, respectively. The side chain of Asn 17 has moved more than 13.5 Å away and rotated by 180° from its position observed in the vanadate-bound structure (Fig. 4B). Thus, pivotal maneuvers of loop 2, as depicted in the model shown in Fig. 5C, probably assist in achieving substrate specificity and catalysis. The structures of MtbGpgP reported here are the first for a GpgP belonging to the histidine phosphatase family. Because the RHG motif is highly conserved, the overall mechanism of catalysis of MtbGpgP is likely to be similar to that of other histidine phosphatases (31, 39-41). As proposed earlier for the structural homologue of MtbGpgP, PhoE (29), catalysis proceeds in two steps. In the first step, the phosphate group is transferred from glucosyl-3-phosphoglycerate to His 11 of MtbGpgP. The NE1 nitrogen atom of His 11 mounts a nucleophilic attack on the phosphorus atom of the phosphate moiety. A proton is shuttled from the carboxyl oxygen of Glu 84 to the leaving group (Fig. 5D). This results in transfer of the phosphate group to His 11 of MtbGpgP and release of the glucosylglycerate from the enzyme. Vanadate covalently linked to His 11 in the MtbGpgP-VO3 structure mimics this phosphohistidine reaction intermediate. The excess charge on the phosphohistidine is probably stabilized by interactions with the side chains of Arg 10, Asn 17, Gln 23, Arg 60, and His 159 and the backbone amide nitrogen of Gly 160. In the second half of the reaction, a water molecule activated by Glu 84 mounts a nucleophilic attack on the phosphorus atom of the phosphohistidine (Fig. 5D). Here, the proton is returned to Glu 84, which can now serve as a proton donor again in the next cycle of catalysis. Numerous solvent molecules are observed around the vanadate and phosphate moieties in the structures of MtbGpgP, supporting such a role for water in catalysis. The nucleophilic attack by water results in release of orthophosphate and completion of the reaction (Fig. 5D). The GG thus formed is the substrate for di-glucosyl glycerate synthase, which catalyzes the third step of the pathway leading to the biosynthesis of MGLPs. The importance of MGLPs in the physiology of Mtb is highlighted by the fact that transposon-mediated mutagenesis has identified at least six essential genes (Rv3030, Rv3032, Rv1208, Rv0127, Rv1326c, and Rv1327c) that are likely to participate in the biosynthesis of MGLPs (42)(43)(44)(45)(46)(47). GpgP (Rv2419c) catalyzes the second step of the pathway leading to the biosynthesis of MGLPs. Therefore, inhibitors targeting this enzyme could potentially aid in the elimination of the pathogen. In this context, the structures of MtbGpgP reported here could be invaluable for designing inhibitors of the enzyme with high specificity and potency.
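The two-step mechanism described above can be condensed into a reaction scheme. Below is a minimal LaTeX rendering of the catalytic cycle as stated in the text; the shorthand E for the enzyme and E-His11~P for the phosphohistidine intermediate is ours:

```latex
% Step 1: phosphotransfer. His11 of MtbGpgP attacks the phosphate of
% glucosyl-3-phosphoglycerate (GPG); Glu84 protonates the leaving group,
% releasing glucosylglycerate (GG).
\mathrm{E} + \mathrm{GPG} \;\longrightarrow\; \mathrm{E\text{-}His^{11}{\sim}P} + \mathrm{GG}
% Step 2: hydrolysis. A Glu84-activated water attacks the phosphohistidine,
% releasing orthophosphate; Glu84 recovers its proton for the next cycle.
\mathrm{E\text{-}His^{11}{\sim}P} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{E} + \mathrm{P_i}
```

The MtbGpgP-VO3 structure corresponds to a trapped analogue of the intermediate on the left-hand side of step 2.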
6,573
2014-06-09T00:00:00.000
[ "Biology", "Chemistry" ]
Stable exponential cosmological solutions with three different Hubble-like parameters in the EGB model with a Λ-term. We consider a D-dimensional Einstein-Gauss-Bonnet model with a cosmological term Λ and two non-zero constants: α1 and α2. We restrict the metrics to be diagonal ones and study a class of solutions with exponential time dependence of three scale factors, governed by three non-coinciding Hubble-like parameters: H ≠ 0, h1 and h2, obeying mH + k1h1 + k2h2 ≠ 0 and corresponding to factor spaces of dimensions m > 1, k1 > 1 and k2 > 1, respectively (D = 1 + m + k1 + k2). We analyse two cases: i) m < k1 < k2 and ii) 1 < k1 = k2 = k, k ≠ m. We show that in both cases the solutions exist if α = α2/α1 > 0 and αΛ > 0 satisfies certain restrictions, e.g. upper and lower bounds. In case ii) explicit relations for exact solutions are found. In both cases the subclasses of stable and non-stable solutions are singled out. For m > 3 the case i) contains a subclass of solutions describing an exponential expansion of a 3-dimensional subspace. Introduction In this paper we consider the D-dimensional Einstein-Gauss-Bonnet (EGB) model with a Λ-term. To some extent this model is unique among the other higher-dimensional extensions of General Relativity (GR) with second-order-in-curvature terms. The reason is the following one: the equations of motion for this model are of the second order (in derivatives), as is the case in Einstein gravity. It is well known that the so-called Gauss-Bonnet term appeared in (super)string theory as a first-order correction (in α′) to the (super)string effective action (e.g. the heterotic one) [1][2][3][4]. Currently, the EGB gravitational model in diverse dimensions and its modifications, see Refs. therein, are rather popular objects for study in cosmology. They are used for a possible explanation of the accelerating expansion of the Universe (i.e. solving the dark energy problem), which follows from supernova (type Ia) observational data [31][32][33]. One may expect that the second-order form of the equations of motion for these models will lead us to solutions which are in some sense close to those coming from GR and its higher-dimensional extensions (e.g. avoiding the ghost branches, at least). The D-dimensional EGB model is a particular case of the Lovelock model [34]. The equations of motion for the Lovelock model also have at most second-order derivatives of the metric (as it takes place in GR). We note that at present there exist several modifications of the Einstein and EGB actions which correspond to F(R), R + f(G) and f(R, G) theories (e.g. for D = 4), where R is the scalar curvature and G is the Gauss-Bonnet term. These modifications are under intensive study devoted to cosmological, astrophysical and other applications; see [28][29][30] and references therein. In this paper we restrict ourselves to diagonal metrics and study (mainly) a class of cosmological solutions with exponential time dependence of three scale factors, governed by three non-coinciding Hubble-like parameters: H ≠ 0, h1 and h2, corresponding to factor spaces of dimensions m > 1, k1 > 1 and k2 > 1, respectively, with a restriction imposed: S1 = mH + k1h1 + k2h2 ≠ 0, and D = 1 + m + k1 + k2. This restriction forbids the solutions with constant volume factor. We note that in the generic anisotropic case with Hubble-like parameters h1, . . . ,
hn obeying S1 = h1 + · · · + hn ≠ 0 (n = D − 1) the number of different real numbers among h1, . . . , hn should not exceed 3 [25]. Here we study two cases: i) m < k1 < k2 and ii) 1 < k1 = k2 = k, k ≠ m. We show that in both cases the solutions exist only if α = α2/α1 > 0 and Λ > 0. The metric is taken in the form g = −dt ⊗ dt + Σ Bi e^{2vi t} dy^i ⊗ dy^i, (2.3) where Bi > 0 are arbitrary constants, i = 1, . . . , n, and M1, . . . , Mn are one-dimensional manifolds (either R or S1) and n > 3. The equations of motion for the action (2.1) give us the set of polynomial equations [23]. Here Gij and Gijkl are, respectively, the components of two metrics on R^n [16,17]: the first one is a 2-metric and the second one is a Finslerian 4-metric. For n > 3 we get a set of fourth-order polynomial equations. We note that for Λ = 0 and n > 3 the set of equations (2.4) and (2.5) has an isotropic solution v1 = · · · = vn = H only if α < 0 [16,17]. This solution was generalized in [19] to the case Λ ≠ 0. It was shown in [16,17] that there are no more than three different numbers among v1, . . . , vn when Λ = 0. This is valid also for Λ ≠ 0 if v1 + · · · + vn ≠ 0 [25]. Here we consider a class of solutions to the set of equations (2.4), (2.5) of the form (2.7), where H is the Hubble-like parameter corresponding to an m-dimensional factor space with m > 1, h1 is the Hubble-like parameter corresponding to a k1-dimensional factor space with k1 > 1 and h2 is the Hubble-like parameter corresponding to a k2-dimensional factor space with k2 > 1. In Sect. 6 we split the m-dimensional factor space for m > 3 into the product of two subspaces of dimensions 3 and m − 3, respectively. The first one is identified with "our" 3d space while the second one is considered as a subspace of a (m − 3 + k1 + k2)-dimensional internal space. We put M4 = · · · = Mn = S1 and we set the internal scale factors corresponding to the present time t0, aj(t0) = (Bj)^{1/2} exp(vj t0), j = 4, . . . , n (see (2.3)), to be small enough in comparison with the scale factor of "our" space for t = t0. We consider the ansatz (2.7) with three Hubble-like parameters H, h1 and h2 which obey the restrictions (2.8). In Ref. [26] the set of (n + 1) polynomial equations (2.4), (2.5) under the ansatz (2.7) and the restrictions (2.8) imposed was reduced to a set of three polynomial equations (2.9), (2.10), (2.11) (of fourth, second and first orders, respectively), where E is defined in (2.4) and where certain shorthand notations are used here and in what follows. This reduction is a special case of a more general prescription (the Chirkov-Pavluchenko-Toporensky trick) from Ref. [20]. Moreover, it was shown in Ref. [26] that the relations (2.12)-(2.14) take place. Let us denote x1 = h1/H and x2 = h2/H. Then restrictions (2.8) read as (2.17). Here we should exclude from our consideration the case m = k1 = k2. Indeed, for m = k1 = k2 > 1 we get from restriction (2.17): 1 + x1 + x2 ≠ 0, while (2.18) gives us the relation 1 + x1 + x2 = 0, which is incompatible with the previous one. We get from (2.10) and (2.12) the relation (2.20) for the quantity P defined there. We note that relation (2.20) is obeyed for αP < 0. Let us prove that α > 0. Indeed, using relation (2.18), or m + k1x1 + k2x2 = 1 + x1 + x2, we get (2.23). Hence, the solutions under consideration take place only if α > 0 (2.24). The calculations give us the relations (2.25), (2.26) for the vector v from (2.7). These may be obtained by using the relation from Ref. [17], where the quantities entering (2.29) and (2.30) are defined. Here we use the notation (2.31) or, equivalently, (2.32). Thus, we are led to a polynomial equation in the variables x1, x2 of fourth order or less (depending upon λ).
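Since the master equation reduces to a polynomial of at most fourth order in x1, the numerical workflow can be sketched compactly. In the snippet below (a sketch, not the paper's code), the polynomial coefficients are a user-supplied placeholder, since their explicit form in terms of (m, k1, k2, λ) is not reproduced here; the linear relation (2.18) and the filtering against the restrictions (2.17) follow the formulas above:

```python
import numpy as np

def x2_of_x1(x1, m, k1, k2):
    # Linear relation (2.18): m + k1*x1 + k2*x2 = 1 + x1 + x2,
    # solved for x2 (k2 > 1, so the denominator is nonzero).
    return (1.0 - m + (1.0 - k1) * x1) / (k2 - 1.0)

def solve_master(poly_coeffs, m, k1, k2, tol=1e-9):
    """Roots of the master polynomial in x1 = h1/H, filtered by the
    restrictions (2.17).  poly_coeffs lists the coefficients of the
    quartic (or lower-order) polynomial, highest degree first; these
    must be derived separately for given (m, k1, k2, lambda)."""
    solutions = []
    for r in np.roots(poly_coeffs):
        if abs(r.imag) > tol:                      # keep real roots only
            continue
        x1 = r.real
        x2 = x2_of_x1(x1, m, k1, k2)
        # Forbidden points: H = h1 (x1 = 1), H = h2 (x2 = 1),
        # h1 = h2 (x1 = x2) and S1 = m + k1*x1 + k2*x2 = 0.
        if (abs(x1 - 1) < tol or abs(x2 - 1) < tol or
                abs(x1 - x2) < tol or
                abs(m + k1 * x1 + k2 * x2) < tol):
            continue
        solutions.append((x1, x2))
    return solutions
```

As the text notes, for any concrete (m, k1, k2) it is usually easier to obtain the coefficients with Maple or Mathematica and then feed them to a root-filtering step of this kind.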
We call relations (2.18) and (2.32) the master equations. The set of these equations may be solved in radicals. Indeed, solving eq. (2.18) and substituting into eq. (2.32) we obtain another (master) equation, which is of fourth order or less depending upon the value of λ. It may be solved in radicals for all m > 1, k1 > 1 and k2 > 1. Here we do not try to write the explicit solution for the general setup. It seems more effective for any given dimensions m, k1 and k2 to find the solutions just by using Maple or Mathematica. An example of a solution with k1 = k2 will be considered below. In what follows we use the identity following from (2.23) and (2.33). The case k1 ≠ k2. Here we put the following restriction: k1 ≠ k2. We write relation (2.31) as λ = f(x1) (3.1). Using relation (2.33) we rewrite the restrictions (2.17) (respectively) as follows (3.2). Extremum points. The calculations give us the derivative df/dx1 in (3.7). Thus, the points of extremum of the function f(x1) are excluded from our consideration due to restrictions (2.8). For the values λ1, λ2, λ3, λ4 defined in (3.8)-(3.11) we note the inequalities (3.19), which are valid for natural numbers m, l, k obeying m > 1, l > 1, k > 1 and either m ≠ l, or m ≠ k, or l ≠ k. This is proved in the "Appendix". We also note that the following symmetry identities take place for the functions λi(m, k1, k2), i = 1, 2, 3. The function λ4(m, k1, k2) is symmetric with respect to its variables, since the functions v(m, k1, k2) and w(m, k1, k2) are symmetric. We find that (in all cases) the function λ = f(x1) is monotonically increasing in the interval (X1 = 1, +∞) from λ1 to λ∞ and it is monotonically decreasing in the interval (X3, X1) from λ3 to λ1. In the case (A+) the function λ = f(x1) is monotonically increasing in the intervals (−∞, X4) and (X2, X3) from λ∞ to λ4 and from λ2 to λ3, respectively, while it is monotonically decreasing in the interval (X4, X2) from λ4 to λ2 (see Fig. 1). In this case the points X1 and X2 are points of local minimum and the points X3 and X4 are points of local maximum. For the case (A−) the function λ = f(x1) is monotonically increasing in the intervals (−∞, X2) and (X4, X3) from λ∞ to λ2 and from λ4 to λ3, respectively, while it is monotonically decreasing in the interval (X2, X4) from λ2 to λ4 (see Fig. 2). The points X1 and X4 are points of local minimum and the points X2 and X3 are points of local maximum. In this case λ2 > λ∞. In the case (A0) the function λ = f(x1) is monotonically increasing in the interval (−∞, X3) from λ∞ to λ3 (see Fig. 3). For this case the point X1 is a point of local minimum, the point X3 is a point of local maximum and the point X2 = X4 is a point of inflection. Using the inequalities (3.38), (3.39) and (3.51) we get from the behaviour of the function f(x1) mentioned above that X3 is the point of absolute maximum and X1 is the point of absolute minimum, i.e. λ1 = f(X1) ≤ f(x1) ≤ f(X3) = λ3 for all x1 ∈ R. Due to (3.2) the points X1, X2, X3, X4 are forbidden for our consideration. We get λ1 < f(x1) < λ3 for all x1 ≠ X1, X2, X3, X4. Let us denote the domain of definition of the function f for our consideration by (−∞, ∞)* ≡ {x | x ∈ R, x ≠ X1, X2, X3, X4}. Since the function f(x1) is a continuous one, the image of the function f is readily found (due to the intermediate value theorem) from the bounds above. Thus, we are led to the following proposition. The case H = 0. It may be verified that in the case H = 0 the solutions under consideration take place only if α > 0, with h1 and h2 given by relations (3.58) and (3.59).
These relations imply α > 0. The substitution of these values of h1 and h2, and H = 0, into equation (2.9) gives us (due to (2.25) and (2.26)) relation (3.57). The case k1 = k2. Here we consider the case m > 1, k1 = k2 = k > 1 and H ≠ 0. We get from (2.18) relation (4.1), and in this case relation (2.23) implies (4.2). The solutions under consideration take place for Λ > 0 and α > 0 (see Sect. 2). Let us denote λ = αΛ > 0. It follows from (2.20) that (4.4) holds; due to (4.4) we have (4.5). The substitution of relations (4.1), (4.2) into formulae (2.29), (2.30) gives us the Hubble-like parameters. Using (4.5) we rewrite relation (2.31) as a quadratic relation (4.10), where D is defined in (4.14). For the quantity F entering these relations we have F > 0, as the following proof shows. Proof. For m > k we have a sum of two positive terms in (4.15) and hence F > 0 in this case. For k > m, we denote k = m + p, p > 0. We obtain (4.17). Due to m > 1 and p > 0 we have a sum of three positive terms in (4.17) and hence F > 0 for k > m. The solution to eq. (4.10) reads (4.18). We are seeking real solutions which obey two restrictions, (4.19) and (4.20). Here the case D = 0 is excluded from the consideration since, as will be shown later, it implies either x1 = 1 or x2 = 1, which contradicts restrictions (2.17). The inequality (4.19) may be rewritten as λ < λ1 for m > k (4.21) and λ > λ1 for m < k (4.22), where (4.23) is used. For the definition of λ1(m, k, l) see (3.9). The set of two equations (4.1) and (4.2) has the following solutions (4.24), (4.25), where ε2 = ±1 and the inequalities (4.28) hold. Now we explain why the case D = 0 was excluded from our consideration. Let us put D = 0. Then we get from (4.18) relation (4.29) and hence (4.30), which implies either x2 = 1 for ε2 = 1 or x1 = 1 for ε2 = −1. But this is forbidden by the first two inequalities in (2.17). Moreover, it is not difficult to verify that relations (4.24), (4.25) and (4.28) imply all four inequalities in (2.17). Indeed, the violation of the first two inequalities in (2.17) leads us either to x1 = 1 or x2 = 1, which may be valid only for E from (4.30) and ε2 = −1 or ε2 = 1, respectively. But due to definition (4.26), relation (4.30) implies (4.29) and hence D = 0, which contradicts relations (4.24), (4.25). The violation of the third inequality gives us x1 = x2, which implies E = 0, but this is forbidden by (4.28). Now, let us verify the last inequality in (2.17). In our case it reads as (4.31). From (4.24), (4.25) we obtain (4.32). The relation (4.31) is satisfied due to (4.32) and m ≠ k. Now we analyse the inequalities in (4.28). We introduce a new parameter ε̄1 = ε1 sign(m − k) (4.33). Then relation (4.18) reads as (4.34). Let us consider the case ε̄1 = −1. The second inequality in (4.28), which bounds X from above, combined with the definition of D in (4.14), gives us (4.36). Relations (4.36) read as (4.37), (4.38). It may be verified that (4.40) holds, where λ∞(k, l) is defined in (3.22). Using (4.23) and (4.40) we rewrite relations (4.37), (4.38) as (4.41), (4.42). Now, we put ε̄1 = 1. The inequality X > 0 is satisfied in this case. We should treat the remaining upper bound on X, which gives relations (4.44). Relations (4.44) read as (4.45), (4.46). It may be verified that (4.48) holds, where λ3(m, k, l) is defined in (3.11). Using (4.23) and (4.48) we rewrite relations (4.45), (4.46) as (4.49), (4.50). We note that this holds for m > k (it was proved in the previous section), while for m < k we have (4.53), (4.54), where λ1 = λ1(k, k) and λ3 = λ3(k, k) are defined in (3.9) and (3.11). The restrictions on λ for our solution may be explained just graphically, as was done in the previous section for k1 ≠ k2. Here x2(x1) = −(m − 1)/(k − 1) − x1 and the restrictions (2.17) read as (4.57). The fourth inequality in (2.17) is obeyed identically (it was checked above). The points X1, X2, X3 are points of extremum of the function f(x1).
They are excluded from our consideration due to restrictions (4.57). The function f(x1) tends to λ∞ as x1 tends to ±∞. For 1 < k < m the function has two points of maximum, at X1 and X2. The graphical representation of f(x1) for m = 5 and k = 4 is depicted in Fig. 5. The analysis of stability. Here we study the stability of the solutions under consideration by using the results of Refs. [23,25,26]. We put the restriction det(Lij(v)) ≠ 0 (5.1) on the matrix L = (Lij(v)). We recall that for a general cosmological setup with the metric (5.3) we have the set of equations [23], where hi = β̇i; a solution is stable if relation (5.8) is satisfied and it is unstable if (and only if) relation (5.9) is satisfied. In order to study the stability of solutions we should verify the relation (5.1) for the solutions under consideration. This verification was done (in fact) in Ref. [26]. The proof of Ref. [26] is based on the first three relations in (2.8) and the inequalities k1 > 1, k2 > 1 and m > 1. We note that relation (2.14) was also used in this proof. Thus, any solution under consideration is stable when relation (5.8) is obeyed, while it is unstable when relation (5.9) is satisfied. The exact solutions obtained in this section obey the first three relations in (2.8) (since x1 ≠ 1, x2 ≠ 1 and x1 ≠ x2) and hence the key restriction (5.1) is satisfied. The stability condition (5.8) in this case reads as follows: for H > 0 (or ε0 = 1, see (4.6)) our special solutions are stable for k > m and they are unstable for k < m. For H < 0 (or ε0 = −1) the solutions are stable for k < m and they are unstable for k > m. The case H = 0. Let us consider the solutions with H = 0 and h1, h2 from (3.58), (3.59), which are valid for k1 ≠ k2, α > 0 and Λ from (3.57). Here k1 > 1 and k2 > 1. We obtain that the sign of S1 coincides with the sign of ±(k2 − k1), where ± is the sign parameter in (3.58), (3.59). It follows from our analysis above that the solution with ±(k2 − k1) > 0 is stable. This takes place when either k2 > k1 and the sign "+" is chosen in (3.58) and (3.59), or k2 < k1 and the sign "−" is selected. For ±(k2 − k1) < 0 the solution is unstable. Here the restriction m > 1 (which is used for the proof of (5.1)) is also assumed. Solutions corresponding to zero variation of G. Here we consider the special solutions to equations (2.9), (2.10), (2.11) with H > 0, 3 < m < k1 < k2 [26] (for m = 3 see [36]). Here Λ = Λ(m, k1, k2) (6.4), where this function is defined in (6.5). These solutions describe accelerated exponential expansion of "our" 3d subspace and a constant internal space volume factor, or zero variation of the effective gravitational constant (in the Jordan frame) obeying the most stringent limitation on G-dot obtained by the set of ephemerides [37], when the following splitting of the Hubble-like parameters is kept in mind (6.6). It follows from Proposition 1 that Λ(m, k1, k2) > 0. Moreover, in this case we have (6.7). Due to the graphical analysis from Sect. 3 we get from (6.7) the following bounds. Remark. It may also be shown that the effective gravitational constant G (in the Jordan frame), calculated for our solutions, obeys the limitation on G-dot from Ref. [37] when Λ belongs to some vicinity of Λ(m, k1, k2), i.e. |Λ − Λ(m, k1, k2)| < δ for some (small enough) δ > 0. Hubble-like parameters vs. constants of the model. The initial constants of the model are α1 ≠ 0, α2 ≠ 0 and Λ. The solutions for the Hubble-like parameters H ≠ 0, h1 and h2, which were analyzed above, depend upon α = α2/α1 > 0 and λ = αΛ. In this section we consider for simplicity the generic case H ≠ 0.
The parameter α has the dimension of L^2 (L is a length), while λ is a dimensionless one. Here we discuss the existence of certain combinations of Hubble-like parameters which either do not depend upon the parameters (or constants) of the model, i.e. α and λ, or depend only upon one of these constants. Such combinations (or functions) of H ≠ 0, h1 and h2 do exist. Indeed, it follows from (2.11) that the Hubble-like parameters for the solutions under consideration obey the identities (7.1) and (7.2). The third basic relation is just (3.1), which we rewrite here as (7.3), where f(x1) is the rational function defined in (3.1). Fig. 6. The graphical representation (in the Hubble-like variables H, h1, h2) of the intersection of the plane (see (7.1)) and the ellipsoid (see (7.2)) for m = 3, k1 = 4, k2 = 5 and α = 1. In the 3d space of Hubble-like parameters H, h1, h2, relation (7.1) describes a plane while (7.2) corresponds to an ellipsoid. The intersection of this plane and this ellipsoid gives us an ellipse E. For m = 3, k1 = 4, k2 = 5 and α = 1 this intersection is depicted in Fig. 6. For H ≠ 0 and m < k1 < k2 the solutions for (H, h1, h2) are described by a 1-dimensional manifold E_sol; the points deleted from E correspond to the forbidden values h1/H = X1, X2, X3, X4 (see (3.3), (3.4), (3.5), (3.6)) for H > 0 and for H < 0, and to the points with H = 0. Thus, the manifold E_sol is a 1-dimensional manifold, which is obtained from the ellipse E by deleting 10 points. It is a disjoint union of ten arcs. Any of these arcs is parametrized by the pair (λ, s), where s is the number of the arc and λ is a local coordinate given by (7.3). An analogous consideration may be done for the case k1 = k2 ≠ m: in this case one should delete 8 points from E to obtain E_sol. Conclusions We have considered the D-dimensional Einstein-Gauss-Bonnet (EGB) model with a Λ-term (or EGBΛ model) and two (non-zero) constants α1 and α2. The metric was chosen to be a diagonal "cosmological" one. Here we were dealing (mainly) with a class of solutions with exponential time dependence of three scale factors, governed by three non-coinciding Hubble-like parameters H ≠ 0, h1 and h2, corresponding to factor spaces of dimensions m > 1, k1 > 1 and k2 > 1, respectively, with the restriction S1 = mH + k1h1 + k2h2 ≠ 0 imposed. We have studied the solutions in two cases: i) m < k1 < k2 and ii) 1 < k1 = k2 = k ≠ m. (The solutions under consideration with k1 = k2 = m are absent.) We have shown that in both cases the solutions exist only if α = α2/α1 > 0, λ = αΛ > 0 and the dimensionless parameter of the model λ obeys certain restrictions, e.g. upper and lower bounds depending upon m, k1 and k2 (see Proposition 1). In the case ii) we have found explicit exact solutions (see Proposition 2). Our consideration used the so-called Chirkov-Pavluchenko-Toporensky splitting trick from Ref. [20] (see also [26]), which allowed us to reduce the problem under consideration to the master equation λ = f(x1) (2.31), where x1 = h1/H. This master equation is equivalent to a polynomial equation (2.34) for x1, which is of fourth order (in the generic case) or less, depending upon λ. Thus, the master equation may be solved in radicals for all m > 1, k1 > 1 and k2 > 1. Our restrictions on λ were obtained by analysing the equation λ = f(x1) with the use of the formulas for the derivative df/dx1, i.e. (3.7) and (4.55) in cases i) and ii), respectively.
In the case i) m < k1 < k2 the extremum points of the function f(x1) are just four non-coinciding points: X1, X2, X3, X4 (see (3.3), (3.4), (3.5), (3.6)), which are exactly the four values of x1 forbidden by the restrictions H = h1, H = h2, h1 = h2, S1 = mH + k1h1 + k2h2 = 0, respectively. In the case ii) 1 < k1 = k2 = k ≠ m we have three forbidden points: X1, X2, X3. The stability of the solutions (as t → +∞) in a class of cosmological solutions with diagonal metrics was analyzed for both cases (i) and ii)) and subclasses of stable and non-stable solutions were singled out. We have proved that in the case i) the solutions with H > 0 are stable for x1 = h1/H > X4 = (m − k2)/(k2 − k1) and unstable for x1 < X4 (see Proposition 3). It was proved that in the case ii) the solutions with H > 0 are stable for k > m and unstable for k < m (see Proposition 4). The stability conditions for H < 0 are equivalent to the instability conditions for H > 0 and vice versa. The solutions of the first class i) contain a subclass of stable solutions describing an exponential expansion of a 3-dimensional subspace with Hubble-like parameter H > 0 and zero variation of the effective gravitational constant G (in the Jordan frame) [26] (see Sect. 6). Some of the results obtained in this paper may be considered as non-trivial and unexpected ones. Indeed, let us compare the solutions governed by three different Hubble-like parameters H > 0, h1, h2 with the solutions from Ref. [27] obtained for two non-coinciding Hubble-like parameters H > 0 and h corresponding to factor spaces of dimensions m > 2 and l > 2 with mH + lh ≠ 0. Here we have found that our solutions take place only for α > 0 and Λ > 0, while in the case of Ref. [27] we have two branches with (a) α > 0, −∞ < αΛ < λ+(m, l), and (b) α < 0, |αΛ| > λ−(m, l), where λ±(m, l) > 0. The solutions from Ref. [27] with α > 0 exist for any Λ ∈ (−∞, 0], while in our case such solutions are absent. We note that the absence of solutions for Λ = 0 may be considered as a special non-trivial result. For two different Hubble parameters such solutions (with Λ = 0 and α > 0) were described in Ref. [38]. As it is proved here, in the case of three Hubble-like parameters (with the restrictions imposed above) the allowed gap for Λ is bounded (at the top and the bottom). Here we have also considered (for completeness) the case H = 0 and have found that the solutions exist only for k1 ≠ k2, α > 0 and a fixed value of Λ > 0 from (3.57). In this case we have two opposite-in-sign solutions for (h1, h2), with one solution being stable and the second one unstable. For possible physical (e.g. cosmological) applications one may keep in mind a dimensional reduction of the model under consideration to d = 4, which leads us to a 4d Horndeski-type model with a set of scalar fields. In this case one will obtain a (1 + 3)-dimensional inflationary (cosmological) solution with Hubble parameter H > 0 and several scalar fields (coming from scale factors) with linear dependence upon the time variable (governed by h1 and h2). The effective cosmological term Λ0 = 3H^2 will have a nontrivial dependence upon the "bare" multidimensional cosmological constant Λ, the dimensions of factor spaces m, k1 and k2, and the parameter α (for any root of the polynomial equation for x1).
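The stability statements of these conclusions translate into a compact test. A minimal sketch, assuming exactly the criteria quoted above (case i: stability for x1 = h1/H > X4 when H > 0, with X4 as read from the text; case ii: stability for k > m when H > 0; conditions reversed for H < 0):

```python
def is_stable(H, h1, m, k1, k2):
    """Stability of an exponential solution per the quoted criteria.
    Case i) m < k1 < k2 uses X4 = (m - k2)/(k2 - k1); case ii)
    k1 == k2 == k != m depends only on the sign of k - m.  For H < 0
    the stability and instability conditions swap."""
    sign = 1.0 if H > 0 else -1.0
    if k1 == k2:                       # case ii)
        return sign * (k1 - m) > 0
    X4 = (m - k2) / (k2 - k1)          # case i)
    return sign * (h1 / H - X4) > 0

# Example: a candidate case-i) solution with an expanding 3d subspace.
print(is_stable(H=1.0, h1=0.5, m=3, k1=4, k2=5))
```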
7,093.8
2019-06-25T00:00:00.000
[ "Mathematics", "Physics" ]
Search for nonstandard neutrino interactions with IceCube DeepCore As atmospheric neutrinos propagate through the Earth, vacuum-like oscillations are modified by Standard Model neutral- and charged-current interactions with electrons. Theories beyond the Standard Model introduce heavy, TeV-scale bosons that can produce nonstandard neutrino interactions. These additional interactions may modify the Standard Model matter effect, producing a measurable deviation from the prediction for atmospheric neutrino oscillations. The result described in this paper constrains nonstandard interaction parameters, building upon a previous analysis of atmospheric muon-neutrino disappearance with three years of IceCube DeepCore data. The best fit for the muon to tau flavor-changing term is ϵµτ = −0.0005, with a 90% C.L. allowed range of −0.0067 < ϵµτ < 0.0081. This result is more restrictive than recent limits from other experiments for ϵµτ. Furthermore, our result is complementary to a recent constraint on ϵµτ using another publicly available IceCube high-energy event selection. Together, they constitute the world's best limits on nonstandard interactions in the µ − τ sector. I. INTRODUCTION Neutrino flavor change has been observed and confirmed by a plethora of experiments involving solar, atmospheric, reactor, and accelerator-made neutrinos; see [1][2][3] and references therein. This phenomenon, also known as neutrino oscillations due to its periodic behavior, implies that at least two of the Standard Model (SM) neutrinos have a nonzero mass, making this the first established deviation from the SM. The massive three-neutrino model has been very successful in explaining the neutrino data with two mass differences, known as the solar squared-mass difference (∆m²_21 ≈ 7.5 × 10⁻⁵ eV²) and the atmospheric squared-mass difference (|∆m²_23| ≈ 2.5 × 10⁻³ eV²) [1,2]. This information, along with the fact that experiments pursuing direct neutrino mass measurements have yielded only upper limits [3], leads to the conclusion that neutrinos have masses that are at least six orders of magnitude smaller than those of the charged leptons. Whether these small masses are also generated by the Higgs mechanism, implying the existence of non-interacting right-handed states, or by a different, yet-unknown mechanism remains an open question.
Many extensions to the SM that incorporate small neutrino masses have been put forward. A subset that addresses small neutrino masses and, at the same time, unifies the electroweak and strong forces is called "Grand Unified Theories" (GUTs). Some of these GUT models predict the existence of heavy TeV-scale bosons [4]. Searches for direct evidence of these particles have been performed by experiments at the Large Hadron Collider. To date, no evidence has been observed [5,6]. In this paper, we address these predictions through a complementary search in the neutrino sector, seeking evidence for new flavor-changing neutrino interactions produced by TeV-scale bosons [7][8][9][10][11][12]. Nonstandard interactions (NSIs) will introduce modifications of the SM potential, which is relevant for matter effects in neutrino flavor oscillations. The effect of the NSI is expected to grow with distance travelled through matter and becomes more relevant as the neutrino energy increases. As a result, the flux of atmospheric neutrinos detected by the IceCube Neutrino Observatory at the South Pole is ideal for such a study [9,13]. In the analysis presented here, we use the data set from [14], which contains multi-GeV atmospheric neutrinos that traverse large fractions of the Earth before reaching the IceCube detector. Because the neutrino production is predominantly from pion and kaon decays, the neutrino flux has well-understood initial flavor ratios [15,16]. Current bounds on NSI are reported in [17][18][19], and current reviews are given in [20][21][22][23]. In fact, independent studies of high-energy atmospheric neutrinos using public IceCube data [24] as well as studies with public Super-Kamiokande data [25] have already been performed, obtaining strong constraints on NSI parameters. Regarding the latter, the Super-Kamiokande collaboration has also performed an analysis on NSI parameters [26]. The IceCube studies have so far only used high-energy public data sets, but no low-energy sets. This motivates the presented search, where we focus on the NSI parameter ϵµτ, which modifies the νµ → ντ flavor transition. The rest of this paper is structured as follows. In Section II, we review neutrino oscillations in matter. In Section III, we describe the NSI theory used in this work. Then in Section IV, we describe the IceCube experiment, and in Section V we discuss the main systematics of this analysis. Section VI contains the main results of this paper, and in Section VII we conclude. II. MATTER EFFECTS IN NEUTRINO OSCILLATIONS Neutrinos are produced in flavor eigenstates, but travel as mass eigenstates, meaning that a certain flavor of neutrino produced at the source may later interact as a different flavor [27,28]. At its simplest, when neutrinos travel through vacuum, the oscillation length is given by L_osc = 2.5 km (E/GeV)(∆m²/eV²)⁻¹ [3].
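As a quick illustration of this formula (the function name and example values are ours, not from the paper):

```python
def vacuum_osc_length_km(E_GeV, dm2_eV2):
    # L_osc = 2.5 km * (E / GeV) * (dm^2 / eV^2)^(-1)
    return 2.5 * E_GeV / dm2_eV2

# A 25 GeV nu_mu with |dm^2_23| ~ 2.5e-3 eV^2: L_osc = 25,000 km, so the
# first survival minimum (at L_osc / 2 = 12,500 km) sits almost exactly
# at the Earth's diameter -- consistent with the ~25 GeV disappearance
# peak for vertically up-going events discussed later in the paper.
print(vacuum_osc_length_km(25.0, 2.5e-3))  # -> 25000.0
```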
Since neutrinos interact via neutral- and charged-current weak interactions, neutrino oscillations are modified as matter is traversed. In particular, the propagating neutrino, which is a mixture of electron, muon, and tau flavors, will experience a flavor-dependent matter potential. The relevant potential difference is produced by charged-current coherent forward scattering from electrons in the Earth. We will refer to this as the "matter effect," and it is closely related to the Mikheyev-Smirnov-Wolfenstein (MSW) effect [29,30] observed in solar neutrino experiments [31][32][33][34]. Indications of matter effects [35,36] in Earth-based oscillation experiments can be extracted from global fits to long-baseline and atmospheric neutrino data sets [37]. III. NONSTANDARD NEUTRINO INTERACTIONS Nonstandard neutrino interactions can be modeled as an additional term in the neutrino Hamiltonian, similar to the conventional matter potential term. The latter effect is included in the neutrino Hamiltonian as a single potential, V_CC, which modifies the flavor transition probabilities. The potential V_CC is proportional to the Fermi coupling constant G_f and the density of electrons n_e, i.e., V_CC = √2 G_f n_e. Adding interactions with nonstandard bosons to the Hamiltonian takes a similar form, but with additional components. To consider all possible flavor-violating interactions, a term ϵ_αβ (α, β = e, µ, τ) scales all possible flavor-violating and flavor-conserving contributions. For definiteness, in this analysis, we consider nonstandard interactions between neutrinos and down quarks (other assumptions, such as for up quarks, can be approximated by rescaling our results). For this reason, a factor of n_d = 3n_e (to account for the fact that down quarks are approximately three times as abundant as electrons in the Earth) was used instead of n_e as in the case of the SM matter effect. The total Hamiltonian is then H = (1/2E_ν) U M² U† + V_CC diag(1, 0, 0) + √2 G_f n_d ϵ, where E_ν is the neutrino energy, U is the neutral lepton mixing matrix (also known as the Pontecorvo-Maki-Nakagawa-Sakata matrix [27,28]), M² is a diagonal matrix containing the square-mass differences, and ϵ is the NSI strength matrix. Accordingly, the addition of the NSI terms amounts to introducing six additional effective parameters if one accounts for hermiticity, unitarity constraints, and the possibility of making the Hamiltonian traceless without loss of generality; see [38]. However, for experiments like Super-Kamiokande and IceCube, the terms that correspond to νµ or ντ interactions will dominate. This is because the atmospheric neutrino flux in the GeV energy range is dominated by νµ, which primarily transform into ντ as they travel through the Earth [39,40]. SM matter effects and NSI can be distinguished using the energy and arrival direction distributions of observed flavor-violating transitions. The neutrino flavor oscillations due to the well-established mass differences have been observed from atmospheric neutrinos predominantly at energies initially below 10 GeV [41] and recently up to 56 GeV [14]. The observation of atmospheric neutrino oscillations at two different energy ranges but at the same ratio of baseline to energy (L/E) tests the massive three-neutrino paradigm and highlights the complementarity of neutrino experiments at different energy ranges. In contrast, the signal predicted for the dominant muon-neutrino to tau-neutrino NSI, parametrized by the coupling ϵµτ, has a smaller magnitude but can be seen over a larger range of energies, as shown in Fig. 1.
Therefore, the optimal method for searching for an NSI signal due to ϵµτ is to use a large range of neutrino energies, where one expects a combined effect of the NSI and oscillations in the low-energy region and an exclusively NSI signal in the high-energy region. In particular, we note that IceCube's range extends to higher energies than that of previous studies, thus giving us greater sensitivity. A study by Super-Kamiokande [26], using a two-neutrino approximation, focused on the NSI parameters ϵ′ = ϵ_ττ − ϵ_µµ and ϵµτ. Prior to works using IceCube data, this resulted in the world's best limit, with ϵµτ < 0.011 at 90% C.L. As in the Super-Kamiokande study, we choose to only consider the dominant NSI terms, so the νe terms are set to zero, and the hermiticity of ϵ is also assumed. Thus, the NSI sector reduces to a two-by-two matrix, so the CP-violating phase can be rephased, i.e., we assume ϵµτ to be real. As can be seen in [21], the neutrino mass ordering is degenerate with the sign of ϵµτ, and the muon neutrino survival probability is symmetric under a sign change of ϵ′. Given that ϵ′ is highly correlated with ϵµτ in this analysis, we set ϵ′ to zero. Also, for definiteness, we assume normal ordering. Note that these degeneracies restrict the interpretation of our results [21,26,42,43]. IV. THE ICECUBE DETECTOR IceCube is a 1 km³ neutrino detector [44][45][46] embedded in the ice at the South Pole; see Fig. 2. It consists of 86 strings, each with 60 10-inch photomultiplier tubes enclosed in glass spheres, called Digital Optical Modules (DOMs). Of those strings, 78 are separated by a distance of approximately 125 m, with DOMs on each string separated by 17 m. An additional infill extension, DeepCore [47], consists of 8 strings separated by about 75 m, with DOMs on each string separated by 7 m. Secondary particles, produced when neutrinos interact in the ice, induce Cherenkov radiation, which is then detected by the DOMs. Muons produce distinctively long tracks. This topology can be reconstructed to determine the angle of the muon with a resolution of 12° at 10 GeV, improving to 6° at 40 GeV [14]. The energy of the muon can be measured from its track length, while the energy of the hadronic shower produced in the neutrino interaction can be estimated from the total amount of light in the detector. Thus, the muon energy, estimated from the track length, added to the reconstructed shower energy is a proxy for the neutrino energy. The closely spaced DOMs of the DeepCore extension allow measuring the neutrino energy down to about 5 GeV, with a median resolution of 30% at 8 GeV, which improves to 20% at 20 GeV [14]. This analysis makes use of neutrinos that reach the detector from below the Earth's horizon. This serves two purposes: first, it greatly diminishes atmospheric muon contamination and, second, it allows for large matter effects.
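To make the combined effect of oscillations and the ϵµτ matter term concrete, the following is a minimal two-flavor (νµ, ντ), constant-density sketch of the survival probability. The actual analysis uses nuSQuIDS with a detailed Earth density profile (see below), so the constant ρY_e, the function names, and the two-flavor truncation here are simplifying assumptions of ours:

```python
import numpy as np

KM_IN_INV_EV = 5.068e9   # 1 km in natural units (1/eV)
V_CC_UNIT = 7.63e-14     # sqrt(2)*G_F*n_e in eV, per unit rho*Y_e [g/cm^3]

def p_numu_survival(E_GeV, L_km, eps_mutau,
                    dm2=2.5e-3, theta23=0.785, rho_Ye=2.75):
    """nu_mu survival probability in the (nu_mu, nu_tau) subspace with a
    flavor-changing NSI term eps_mutau coupling to down quarks
    (n_d = 3*n_e), evolved through matter of constant density."""
    E = E_GeV * 1e9                                   # energy in eV
    s, c = np.sin(2 * theta23), np.cos(2 * theta23)
    H_vac = (dm2 / (4 * E)) * np.array([[-c, s], [s, c]])
    H_nsi = 3 * V_CC_UNIT * rho_Ye * eps_mutau * np.array([[0., 1.],
                                                           [1., 0.]])
    w, U = np.linalg.eigh(H_vac + H_nsi)              # exact 2x2 evolution
    phases = np.exp(-1j * w * L_km * KM_IN_INV_EV)
    amp_mumu = (U @ np.diag(phases) @ U.conj().T)[0, 0]
    return abs(amp_mumu) ** 2

# Vertically up-going neutrinos (cos(theta) = -1, Earth-diameter baseline):
for eps in (0.0, 0.01):
    print(eps, p_numu_survival(100.0, 12742.0, eps))
```

At 100 GeV the vacuum oscillation phase is already small, so most of the flavor change in the second case comes from the NSI term, which is the qualitative behavior shown in Fig. 1.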
A. Sensitivity and data set IceCube has measured neutrino oscillation parameters by searching for a deficit of neutrinos traveling through the Earth and interacting in the detector. In IceCube, the νµ disappearance probability peaks at ∼25 GeV for straight up-going events, but the oscillation signal is measurable up to about 100 GeV, as shown in Fig. 1. In 2014, IceCube published the result of fitting 5174 events from three years of data taken with the complete IceCube detector, obtaining three-neutrino oscillation parameters with a precision comparable to that from dedicated neutrino oscillation experiments [14]. This study uses a three-neutrino formalism of the neutrino survival probabilities to calculate limits on the ϵµτ parameter. We use the publicly available nuSQuIDS neutrino survival probability package [48,49], which has a robust implementation of NSI and uses a detailed Earth density profile [50]. Simulated events are weighted with the Honda et al. atmospheric neutrino model [15], then are binned in an 8×8 matrix in reconstructed energy, from 6.3 GeV to 56.2 GeV, and zenith angle, from cos θ_z^reco = −1 to cos θ_z^reco = 0 (see Fig. 3). To determine the expected sensitivity for values of ϵµτ in the range of the Super-Kamiokande limit, the total numbers of events expected with and without NSI effects were calculated, as shown in Fig. 3. B. Systematic uncertainties Systematic uncertainties that we have included as nuisance parameters in the fit are: Oscillation parameters: we simultaneously fit for the standard oscillation parameters sin²(θ23) and ∆m²_23 as nuisance parameters. Ice column scattering coefficient: scattering of light in the ice that formed within the hole after the DOMs were inserted [51]. This ice contains bubbles that are not found in the bulk ice of the detector. The latter is well studied using flashers and well modeled. The additional bubbles increase the scattering of light, affecting the effective angular efficiency of our DOMs; see [51] for details. Optical efficiency: the uncertainty in the photon response of the optical modules due to many effects, including photocathode response and obscured regions due to cabling. Overall normalization (N): a parameter that scales the event rate expectation freely. This absorbs overall normalization uncertainties due to absolute DOM efficiencies and the total cosmic-ray flux. Relative νe to νµ normalization (N_e/µ): relative normalization of the electron neutrinos to atmospheric muon neutrinos. Atmospheric muon fraction (R_µ): normalization of cosmic-ray muons that pass the cuts. The distribution of this background was obtained using a data-driven method [14]. Spectral index (γ): the exponent describing the energy dependence of the incoming cosmic-ray spectrum. This systematic in part accounts for uncertainties due to hadronization processes [52]. For a more detailed discussion of these systematic effects, see [53]. VI. RESULT In order to constrain the NSI parameter ϵµτ, we employ the same data set and event selection in this analysis as was used in [14]. This analysis has the same energy and zenith-angle resolution and the same systematic uncertainties as the analyses in [14,53], with an additional fiducial volume cut, resulting in a final sample of 4625 events [53]. The data and Monte Carlo are in good agreement after the fit, as shown in Fig. 4.
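The 8×8 analysis grid described above can be written down directly. Whether the energy binning is exactly logarithmic is our assumption, though the quoted edges suggest it (6.3 and 56.2 GeV are 10^0.8 and 10^1.75):

```python
import numpy as np

# 8 bins in reconstructed energy, 6.3 GeV to 56.2 GeV (log-spaced),
# and 8 bins in reconstructed zenith, cos(theta) from -1 to 0.
energy_edges = np.logspace(np.log10(6.3), np.log10(56.2), 9)
coszen_edges = np.linspace(-1.0, 0.0, 9)

# Hypothetical reconstructed events would then be histogrammed as:
# counts, _, _ = np.histogram2d(e_reco, cosz_reco,
#                               bins=(energy_edges, coszen_edges))
```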
To determine the best-fit oscillation parameters, the simulated data distributions are compared to the data bin-by-bin. Minimizing the Poisson likelihood value of the data given the Monte Carlo, modified by the nuisance parameters (as described in [14,53]), determines the final best-fit parameters. The 90% confidence level limits are then calculated using the difference from the best-fit likelihood, assuming Wilks' theorem applies [54]. To make the comparison to [24], we also calculate the credibility regions by integrating the profiled likelihood using a uniform prior on ϵµτ and profiling over the nuisance parameters. This procedure is found to be in good agreement with the result obtained using Wilks' theorem. The resulting constraints on the NSI parameters are shown in Fig. 5, with the best-fit values for the systematic parameters shown in Table I. Priors on the atmospheric and detector nuisance parameters are the same as in [14]. Furthermore, Fig. 6 shows the correlation between the fit parameters at the best fit of the oscillation and nuisance parameters. The shifts in the mass-squared difference ∆m²_23 are expected from existing correlations and degeneracies in the oscillation probability [38]. Finally, the changes in the oscillation parameters compared to [14] have been demonstrated to be caused by the additional cut on the fiducial volume. For this analysis, the best fit is at ϵµτ = −0.0005. The 90% C.L. range is −0.0067 < ϵµτ < 0.0081. This result is consistent with the Super-Kamiokande limits for ϵµτ [26] and represents an independent determination of the parameter. To compare with this, in Fig. 5 we show the results from [24] obtained using public IceCube high-energy data. Fig. 1 shows that the signal for ϵµτ is largest in the region above 100 GeV. A planned extension of this study including a sample of events above 100 GeV would significantly improve constraints on NSI parameters [13]. VII. CONCLUSIONS The existence of physics beyond the Standard Model has been suggested by the nonzero neutrino mass, in addition to the existence of dark matter. Extensions of the Standard Model that explain these observations could lead to a modified strength of neutrino interactions in standard matter. Experiments like IceCube have the potential to constrain these nonstandard interactions with greater precision than previous experiments. Our best fit of the NSI flavor-changing parameter yields ϵµτ = −0.0005, with a 90% C.L. range of −0.0067 < ϵµτ < 0.0081. This result is comparable to, with a slight improvement over, the Super-Kamiokande limits for ϵµτ (|ϵµτ| < 0.011 at 90% C.L.). A recent study [24] using IceCube public data obtained constraints which are slightly better than the ones shown in this paper. These constraints are also shown in Fig. 5 and are complementary to our result, as they are affected by different systematics and make use of a different energy regime.
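The interval construction just described (profile out the nuisance parameters, then cut on −2∆LLH using Wilks' theorem) can be sketched as follows. The scan in the example is a placeholder parabola, not the analysis likelihood:

```python
import numpy as np
from scipy.stats import chi2

def wilks_interval(eps_grid, neg2_delta_llh, cl=0.90, dof=1):
    """Allowed range of eps_mutau: all scanned values whose profiled
    -2*Delta(log-likelihood) lies below the chi^2 quantile (~2.71 for
    90% C.L. with one degree of freedom)."""
    threshold = chi2.ppf(cl, dof)
    allowed = np.asarray(eps_grid)[np.asarray(neg2_delta_llh) < threshold]
    return allowed.min(), allowed.max()

eps = np.linspace(-0.02, 0.02, 801)
toy_scan = ((eps + 0.0005) / 0.0045) ** 2      # illustrative parabola
print(wilks_interval(eps, toy_scan))           # roughly (-0.008, 0.007)
```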
FIG. 1. Muon neutrino (top) and antineutrino (bottom) survival probability at zenith angle cos θ = −1, corresponding to vertically up-going neutrinos that traverse the entire diameter of the Earth, for global best-fit oscillations (solid) and ϵµτ = 0.01 NSI, close to the current Super-Kamiokande limits (dashed) [26]. NSI effects are visible in the full neutrino energy range of 10-1000 GeV.
FIG. 2. Detector geometry: green circles represent IceCube strings and red ones DeepCore strings.
FIG. 3. Expected pulls of predicted event numbers as a function of neutrino energy and zenith angle. The left (right) panel compares ϵµτ = −0.01 (ϵµτ = 0.01) to the standard neutrino oscillation matter effects (SI) expectation.
FIG. 5. Confidence limits from this analysis on the NSI parameter ϵµτ using the event selection from [14,53], shown as solid vertical red lines. Similarly, dashed vertical red lines show the 90% credibility interval using a flat prior on ϵµτ, where we have profiled over the nuisance parameters. The light blue vertical lines show the Super-Kamiokande 90% confidence limit [26]. The light green lines show the 90% credibility region from [24]. Finally, the horizontal dash-dot line indicates the value of −2∆LLH that corresponds to a 90% confidence interval according to Wilks' theorem.
FIG. 6. Correlation matrix of the nuisance and physics parameters considered in this analysis, calculated at the maximum-likelihood solution. The color scale shows the correlation coefficient (ρ).
TABLE I. List of the best-fit parameters obtained in this analysis. When priors are listed, they are Gaussian, and the width corresponds to the one-sigma range. Values obtained at the best-fit point are also listed.
4,527.6
2017-09-20T00:00:00.000
[ "Physics" ]
A Framework for Detecting False Data Injection Attacks in Large-Scale Wireless Sensor Networks False data injection attacks (FDIAs) on sensor networks involve injecting deceptive or malicious data into the sensor readings that cause decision-makers to make incorrect decisions, leading to serious consequences. With the ever-increasing volume of data in large-scale sensor networks, detecting FDIAs in large-scale sensor networks becomes more challenging. In this paper, we propose a framework for the distributed detection of FDIAs in large-scale sensor networks. By extracting the spatiotemporal correlation information from sensor data, the large-scale sensors are categorized into multiple correlation groups. Within each correlation group, an autoregressive integrated moving average (ARIMA) is built to learn the temporal correlation of cross-correlation, and a consistency criterion is established to identify abnormal sensor nodes. The effectiveness of the proposed detection framework is validated based on a real dataset from the U.S. smart grid and simulated under both the simple FDIA and the stealthy FDIA strategies. Introduction Wireless sensor networks (WSNs) consist of spatially dispersed sensors connected via wireless communication protocols [1].These sensors are equipped with sensing capabilities to collect data on environmental parameters and physical quantities, which are transmitted to a central server or data center for further analysis and decision-making.WSNs are widely employed in various fields, including military affairs, agriculture, healthcare, industrial automation, and intelligent transportation [2]. Typically, the sensors in WSNs are resource-constrained devices in unprotected environments that are vulnerable to physical tampering [3][4][5].The behavior of an attacker who physically tampers with sensor data is known as a false data injection attack (FDIA).As a result of the FDIA, the tampered sensors provide misleading data to the central server, leading the system to make incorrect judgments.The FDIA undermines the authenticity of sensor data, which can seriously impact systems that rely on sensor data for decisionmaking or monitoring, culminating in economic loss or even a life crisis.As a result, it is critical to develop detection mechanisms to ensure that WSNs are resistant to FDIAs [6,7]. Motivation The focus of this paper is on detecting FDIAs in large-scale WSNs.Our goal is to provide a detection framework for FDIAs with the following properties: • Stealthy FDIA detection.The attacker's purpose is to use resourceful and sophisticated strategies to minimize the risk of being identified.Stealthy FDIAs may be employed, i.e., making the injected false data look as close to the genuine data as possible, such as by mimicking genuine data distributions and time series patterns.Since stealthy FDIAs are typically not easily observed, the detection framework should take this into account to reduce the likelihood of potential harm.• Distribute detection.The detection process might be centralized or distributed.In centralized detection, all sensor data are sent to a central node for thorough processing. 
In distributed detection, sensor data are evaluated separately by local sensors or edge devices, making it more responsive to data changes than centralized detection. More significantly, given the widely dispersed sensors and enormous data volumes in large-scale WSNs, distributed detection can be more straightforward to scale. • General detection. Large-scale WSNs are employed in various fields, and the physical behavior of such systems is diverse. Electric power systems, for example, can be described using circuit equations, whereas thermodynamic systems can be represented using thermodynamic laws. Therefore, a detection framework based only on measurements, one that does not require domain-specific a priori knowledge, is necessary; this makes the detection method more general and allows similar detection methods to be applied to sensors in different domains without much adaptation. Main Contributions We propose a correlation-based framework for detecting FDIAs, and our main contributions are sketched below. • We first develop a grouping approach based on the temporal correlation of the cross-correlation between the time-series signals of pairwise sensors. All sensors are categorized into multiple correlated groups, and subsequent detection methods are performed separately within the groups. • We build an autoregressive integrated moving average (ARIMA) model for predicting future data from each sensor using historical time-series signals, which is used to learn the normal temporal correlation of the cross-correlation between data reported by pairwise sensors. • Based on the comparison of the normal and actual temporal correlation of the cross-correlation within each group, a criterion for determining the consistency of pairwise sensor data is established. Then, majority voting is executed within each group to identify the abnormal sensors. • To verify the performance of the detection framework, we construct simple FDIAs and stealthy FDIAs in a genuine sensor dataset. The effectiveness of our proposed detection framework is verified through extensive simulation experiments. The remainder of this paper is organized as follows: Section 2 reviews the related works. In Section 3, sensor data and correlation definitions are introduced. The detection framework is described in detail in Section 4, and the performance of the detection framework is corroborated through simulation experiments in Section 5. Finally, this work is summarized in Section 6. Related Work In this section, we comment on previous work related to the present paper, aiming to highlight the novelty of our work. Detecting FDIAs in sensors has received considerable attention. We categorize the existing related works into two research directions related to FDIA detection: FDIA detection methods and FDIA types. FDIA Detection Methods Recent studies have been conducted to detect FDIAs on sensors by modeling the physical behavior of the system. In general, the physical behavior of the system is established based on physical equations (fluid dynamics, electromagnetic laws, etc.)
to predict the sensor data, and then the predicted data are compared to the actual data [8].Some attempts have been made to build predictive models to detect FDIAs through the dynamical equations of smart grids [9,10], unmanned aerial vehicles [11], water distribution systems [12], and cyber-physical systems [13][14][15].However, this detection approach requires appropriate predictive models for specific domains and relies on a priori knowledge of specific physical behaviors, which allows for limited scalability. Subsequent studies have explored techniques for detecting FDIAs from sensor measurements, with the majority of these works based on exploring inter-measurement correlations.Illiano et al. [16] presented an approach to detecting FDIAs in WSNs that combines measurement checks and authentication strategies.Aboelwafa et al. [17] addressed an approach to detecting FDIAs in the industrial Internet of Things that exploits sensor data correlation in time and space.Martovytskyi et al. [18] explored the method of FDIA detection, which is based on spatiotemporal correlation in smart grids.Berjab et al. [19] presented a method for detecting FDIAs in WSNs, which uses observed spatiotemporal and multivariate attribute sensor correlations.Huang et al. [20] addressed the problem of detecting FDIAs in dynamic WSNs based on spatial correlation.Based on the spatiotemporal correlation, Hu et al. [21] explored the idea of fault diagnosis to detect collusive FDIAs in WSNs.However, these efforts depend on centralized detection, increasing the complexity and cost of detection systems in the face of increasing data volumes. In contrast, distributed detection methods can be more easily scaled to large-scale sensor networks.Chen et al. [22] built distributed real-time detection algorithms based on spatiotemporal correlation to detect FDIAs in large-scale networked industrial sensing systems.Islam et al. [23] utilized distributed algorithms based on spatiotemporal correlation to detect data anomalies in large-scale intelligent transportation systems.Lai et al. [24] suggested a distributed approach to detecting FDIAs in WSNs using temporal, spatial, and event-based correlation.In this paper, our framework is based on a distributed approach, where detection methods can be executed at separate edge devices to reduce the network pressure associated with processing data generated by large-scale sensors. FDIA Types Another crucial consideration in FDIA detection is the type of attack.An adversary may employ simple attacks, such as randomly injecting high outliers and injecting false data with a common strategy.An adversary may employ stealthy attacks, such as constructing coherent attack signals.Most of the works [17][18][19][20][22][23][24] mentioned based on sensor measurements themselves are effective in detecting simple FDIAs, but not stealthy ones.For instance, in [22], based on the spatiotemporal correlation of sensor data, the authors used exponential weighted moving average and principal component analysis to establish a rotated ellipse area for each pair of sensors in a correlation group and detected FDIAs by determining whether the current sensor readings for each pair of sensors were located within the corresponding area of the rotated ellipse.Assuming that an attacker employs a collusive strategy whereby the current anomalous readings of a pair of sensors are also located within the corresponding area of the rotated ellipse, this may result in a false alarm. 
While some works [16,21] have considered the collusive scenario, further development is needed for the case where an attacker employs stealthy attacks that construct coherent attack signals (mimicking genuine data distributions and time-series patterns). Therefore, in this paper, we propose a generalized detection framework that can be used to detect FDIAs in large-scale WSNs, including stealthy FDIAs. The approach we propose in this paper, together with the previously mentioned works, is summarized in Table 1.

Table 1. Comparison of the proposed approach with related works (columns: measurement-based; distributed; detects simple FDIAs; detects collusive FDIAs; detects stealthy FDIAs).
Huang et al. [20]: yes; no; yes; no; no
Hu et al. [21]: yes; no; yes; yes; no
Chen et al. [22]: yes; yes; yes; no; no
Islam et al. [23]: yes; yes; yes; no; no
Lai et al. [24]: yes; yes; yes; no; no
Our approach: yes; yes; yes; yes; yes

Preliminaries

We extract information from the sensor data itself to detect FDIAs. In this section, we discuss the definition of sensor data and the correlation between sensor data.

Sensor Data

Consider a set of sensors V = {v_1, · · · , v_N} distributed over a geographic area, where each sensor v_i ∈ V collects one type of environmental data in synchronization with the other sensors. Let r_i(t) denote the sensor measurement reported by v_i at time t, as follows:

r_i(t) = r̃_i(t) + ϵ(t)

where r̃_i(t) is the true value and ϵ(t) is an error at time t. The error ϵ(t) can be caused by either a random error or a systematic error. A random error is an uncertainty in the measurement result caused by various random factors (e.g., noise), and a systematic error is an uncertainty in the measurement result due to inherent defects or biases (e.g., faults, FDIAs). Since our work focuses on detecting FDIAs on sensors, we only consider systematic errors caused by FDIAs. The collection of r_i(t) from v_i over a period of time is a time-series signal [25]. A time-series signal consisting of t successive sensor measurements r_i(1), r_i(2), · · · , r_i(t) can be expressed as follows:

R_i(t) = {r_i(1), r_i(2), · · · , r_i(t)}

Spatiotemporal Correlation between Sensor Data

Spatiotemporal correlation is a combination of spatial and temporal correlation, referring to the simultaneous existence of correlations in space and time. The correlation of sensor data exists because sensors are distributed in space and measure time-dependent physical phenomena. The anomalous data generated when an FDIA occurs can disrupt this correlation, so we can identify false data injection attacks by analyzing the correlation of sensor data [26].

Spatial Correlation

Spatial correlation between sensor data over a fixed time interval reveals the degree of association between events or phenomena at adjacent or discrete locations in space. For example, in a smart grid, neighboring industrial facilities may belong to similar industries and, thus, have a similar electricity demand, resulting in a strong spatial correlation between meter data in industrial areas, whereas the spatial correlation between meter data in industrial areas and meter data in residential areas may be weak.

Temporal Correlation

The temporal correlation of sensor data reveals the degree of association between events or phenomena over time. For example, in a smart grid, due to differences between day and night, seasonal factors, etc., observing hourly, daily, weekly, or seasonal meter data makes it possible to find repeating patterns or regularities in the use of electrical energy on different time scales.
FDIA Detection Framework

In this section, this paper proposes a framework for FDIA detection. This framework consists of three phases: correlation grouping, correlation prediction, and correlation testing, stated as follows:

• Phase I: Correlation grouping. The purpose of this phase is to group V in a large-scale WSN based on historical sensor data so that sensors in the same group are highly correlated with one another.

• Phase II: Correlation prediction. The purpose of this phase is to predict the normal temporal correlation of the cross-correlation between pairwise sensor measurements in the same group over a short period of time in the future.

• Phase III: Correlation testing. The purpose of this phase is to test the actual sensor data against the predicted normal temporal and spatial correlations.

The flow diagram for FDIA detection in large-scale WSNs is shown in Figure 1. Next, let us discuss the three phases in detail.

Correlation Grouping

Collect sensor data, ensuring that the data are collected at the same or similar frequencies, and pre-process the data if necessary, including de-noising, filling in missing values, interpolating, and other operations that facilitate analysis. Standardize the sensor data (e.g., min-max normalization, z-score normalization) to ensure that the measurements from different sensors are similarly scaled, so that the magnitude of change in one sensor does not distort the cross-correlation results.

Let R_i(T) = {r_i(1), r_i(2), · · · , r_i(T)} denote the Historical Time-series Signal (HTS) of v_i obtained after data processing, where T denotes the length of the HTS. The cross-correlation of two full HTSs is usually calculated to determine the spatial correlation between R_i(T) and R_j(T), expressed as follows:

C_ij(τ) = Σ_t [r_i(t) − r̄_i][r_j(t + τ) − r̄_j] / [T δ(R_i(T)) δ(R_j(T))]

where δ(R_i(T)) and δ(R_j(T)) denote the standard deviations of R_i(T) and R_j(T), respectively; C_ij denotes the correlation coefficient of v_i and v_j at lag τ; and r̄_i and r̄_j represent the average values of the two full HTSs from v_i and v_j. The lag τ represents the delay of one HTS with respect to the other, and by analyzing the peak of the cross-correlation, it is determined at which lag value the correlation between the two HTSs is greatest. C_ij takes a value between 1 and −1, where 1 means perfect positive correlation, −1 means perfect negative correlation, and 0 means the signals are uncorrelated [27].

However, this paper's goal is to extract the temporal correlation of the cross-correlation between any two HTSs, so sliding-window cross-correlations need to be computed.

First, let the size of the sliding window be k. The wth sub-signal, consisting of k successive sensor measurements r_i(w), · · · , r_i(w + k − 1) within [1, t], can be defined as

R_i(T, w) = {r_i(w), r_i(w + 1), · · · , r_i(w + k − 1)} (6)

Therefore, the HTS of v_i is segmented into multiple historical sub-signals, where W denotes the number of historical sub-signals.

Second, for R_i and R_j, the cross-correlation is computed within each sliding window, denoted as

C_ij(w) = cov(R_i(T, w), R_j(T, w)) / [δ(R_i(T, w)) δ(R_j(T, w))] (7)

Here, cov(R_i(T, w), R_j(T, w)) represents the covariance between R_i(T, w) and R_j(T, w); δ(R_i(T, w)) and δ(R_j(T, w)) represent the standard deviations of R_i(T, w) and R_j(T, w), respectively. Then, the time series of the cross-correlation of v_i and v_j can be represented by

C_ij = {C_ij(1), C_ij(2), · · · , C_ij(W)}

Finally, we pick the C_ij with positive correlation for K-means clustering, one of the most widely used clustering methods; a minimal sketch of this grouping phase is given below. After K-means clustering, the sensors can be categorized into multiple correlation groups.
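To make the grouping phase concrete, the following is a minimal Python sketch of the sliding-window cross-correlation of Equations (6) and (7) followed by K-means grouping. It is an illustration under stated assumptions, not the authors' implementation: the synthetic signals, window size, cluster count, and the choice of (mean, standard deviation, slope) as summary features are placeholders chosen for brevity (the paper's own feature extraction is described in the next subsection).

```python
import numpy as np
from sklearn.cluster import KMeans

def sliding_window_xcorr(r_i, r_j, k):
    """Pearson cross-correlation of two equal-length signals computed in a
    sliding window of size k: one coefficient C_ij(w) per window position."""
    T = len(r_i)
    W = T - k + 1                        # number of sub-signals
    c = np.empty(W)
    for w in range(W):
        c[w] = np.corrcoef(r_i[w:w + k], r_j[w:w + k])[0, 1]
    return c

# Illustrative data: N standardized historical time-series signals (HTSs).
rng = np.random.default_rng(0)
N, T, k = 8, 500, 60
base = np.sin(np.linspace(0.0, 20.0, T))
hts = np.stack([base + 0.2 * rng.standard_normal(T) for _ in range(N)])

# Build the time series of cross-correlation C_ij for each positively
# correlated pair, summarized as a (mean, std, slope) feature vector.
pairs, features = [], []
for i in range(N):
    for j in range(i + 1, N):
        c_ij = sliding_window_xcorr(hts[i], hts[j], k)
        if c_ij.mean() > 0:              # keep positively correlated pairs
            slope = np.polyfit(np.arange(len(c_ij)), c_ij, 1)[0]
            pairs.append((i, j))
            features.append([c_ij.mean(), c_ij.std(), slope])

# K-means clustering of the feature vectors into correlation groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.asarray(features))
```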
For a dataset with M time series of cross-correlation, we represent each C_ij with a positive correlation as a feature vector. We extract relevant features that capture the characteristics of the time series; commonly used features include the mean, standard deviation, slope, etc. Each C_ij with a positive correlation is represented as a feature vector

v_p = [f_p1, f_p2, · · · , f_pk]

where k is the number of features and v_p includes all necessary extracted features (mean, standard deviation, slope, etc.). The random cluster centers u_1, u_2, · · · , u_K are first selected, and then the K-means objective function is defined as follows:

J = Σ_{p=1}^{M} Σ_{q=1}^{K} x_pq ∥v_p − u_q∥²

where x_pq is an indicator function indicating whether time series p belongs to cluster q (q = 1, 2, · · · , K), and ∥·∥² denotes the squared Euclidean distance.

We update the centroids of the clusters by calculating the mean feature vector for each cluster:

u_q = (1 / |C_q|) Σ_{v_p ∈ C_q} v_p

where |C_q| is the number of time series in cluster q. We repeat the centroid update and the minimization of J until convergence. Then,

Definition 1. Let V_i^q = {v_j | C_ij ∈ cluster q, j ≠ i} be the set of sensors consistent with sensor v_i in cluster q obtained according to the HTSs, and let V_q = {v_i | |V_i^q| / (N − 1) > 50%} be the set of sensors that are grouped in q according to the HTSs.

Remark 1. Figure 2 illustrates the correlation grouping of four sensor nodes. After correlation grouping, each group's sensor data can be sent to a separate edge device for distributed processing to reduce network pressure and improve processing efficiency [28]. The following stages are performed within each group: correlation prediction and correlation testing.

Correlation Prediction

Next, we predict the normal temporal correlation of the cross-correlation between pairwise sensor measurements in each group over a short period of time in the future.

Consider pairwise sensors v_i and v_j in a group. As discussed in the previous subsection, the measurements of v_i and v_j should be temporally correlated with their previous measurements. Therefore, this subsection uses the Autoregressive Integrated Moving Average (ARIMA) model to predict the future time-series signal of each sensor based on the HTS, which is referred to as the Estimated Time-series Signal (ETS). ARIMA is a time-series predictive analysis method that requires only historical data to make predictions and can be applied to a wide range of time-series data.

ARIMA combines the concepts of autoregression (AR), moving average (MA), and the operation of differencing the time-series signals. Specifically, the autoregressive part represents the relationship between the current value of a variable and its values at the p′ previous moments, where p′ denotes the autoregressive order. The moving-average part represents the relationship between the current value and the errors (white noise) at the q′ previous moments, where q′ denotes the moving-average order. The d-order differencing operation is performed to remove trends and seasonality from the HTSs. Therefore, an ARIMA model is used to fit the trend and periodicity of the HTS by choosing appropriate parameters (p′, d, q′) to make forecasts of the ETS [29].

First, a suitable d is chosen using the following difference method:

∆^d r_i(t) = ∆^{d−1} r_i(t) − ∆^{d−1} r_i(t − 1)

where ∆r_i(t) = r_i(t) − r_i(t − 1) denotes the first-order difference at time point t. The suitable value is d when the sequence after d-order differencing of the HTS passes the Augmented Dickey-Fuller (ADF) test [30].
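As a concrete illustration of this d-selection step, the sketch below repeatedly differences a signal until the ADF test rejects the unit-root hypothesis. It assumes the adfuller routine from statsmodels; the significance level alpha and the cap d_max are illustrative choices, not values taken from the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def choose_d(x, d_max=3, alpha=0.05):
    """Return the smallest differencing order d for which the d-times
    differenced series passes the ADF stationarity test."""
    x = np.asarray(x, dtype=float)
    for d in range(d_max + 1):
        y = np.diff(x, n=d) if d > 0 else x
        p_value = adfuller(y, autolag="AIC")[1]
        if p_value < alpha:              # unit root rejected -> stationary
            return d
    return d_max                         # fall back to the maximum order
```

For example, `d = choose_d(hts[0])` would typically return 0 for a stationary signal and 1 for a trending one.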
Second, for all possible combinations of p′ and q′, an ARIMA model is fitted, and the Akaike information criterion (AIC) is used to select the best combination of p′ and q′ as the one with the smallest AIC value. The formula for calculating AIC is as follows:

AIC = 2l − 2 ln(L)

where L is the maximum likelihood estimate of the model, and l is the number of parameters of the model. Third, the HTS is fitted using an ARIMA model of order (p′, d, q′), which is formulated as follows:

∆^d r_i(t) = µ + Σ_{l=1}^{p′} ϕ_l ∆^d r_i(t − l) + Σ_{l=1}^{q′} ψ_l ϵ(t − l) + ϵ(t)

where µ, ϕ_l, and ψ_l are model parameters, and ϵ(t) stands for the value of the independent error at time t, which follows a Gaussian distribution with a zero mean. The fitted model is tested to see whether it matches the characteristics of the data, including the autocorrelation and partial autocorrelation of the residuals and the normality of the residuals. Finally, when the fitted model is used to predict future data, the ETSs can be obtained by the difference restoration of the predicted data. An estimated time-series signal consists of S successive predicted measurements r′_i(1), r′_i(2), · · · , r′_i(S). Then, let R′_i(S) = {r′_i(1), r′_i(2), · · · , r′_i(S)} denote the ETS of v_i, where S denotes the length of the ETS.

The ETS and HTS are concatenated into a new time-series signal X_i(t), which consists of t = T + S successive sensor measurements and can be expressed as follows:

X_i(t) = {r_i(1), · · · , r_i(T), r′_i(1), · · · , r′_i(S)}

Similar to Equation (6), the wth estimated sub-signal consisting of k successive sensor measurements within [1, t] can be defined as X_i(t, w). Consider each pair of v_i and v_j in a group; C′_ij = {C′_ij(1), C′_ij(2), · · · , C′_ij(S)} is calculated in the same manner as in Equation (7) based on X_i(t, w) and X_j(t, w), where w = 1, 2, · · · , S. Therefore, C′_ij represents the normal temporal correlation of the cross-correlation between pairwise sensor measurements in a group. The diagram of correlation prediction within a group is shown in Figure 3.

Correlation Testing

After correlation prediction, we compare C′_ij with the actual values to detect FDIAs in this subsection.

An actual time-series signal consists of S successive sensor measurements r*_i(1), r*_i(2), · · · , r*_i(S). Then, let R*_i(S) = {r*_i(1), r*_i(2), · · · , r*_i(S)} denote the Actual Time-series Signal (ATS) of v_i, where S denotes the length of the ATS.

The ATS and HTS are concatenated into a new time-series signal Y_i(t), which consists of t = T + S successive sensor measurements and can be expressed as follows:

Y_i(t) = {r_i(1), · · · , r_i(T), r*_i(1), · · · , r*_i(S)}

Similar to Equation (6), the wth actual sub-signal, which consists of k successive sensor measurements within [1, t], can be defined as Y_i(t, w). Consider each pair of v_i and v_j in a group; C*_ij = {C*_ij(1), C*_ij(2), · · · , C*_ij(S)} is calculated in the same manner as in Equation (7) based on Y_i(t, w) and Y_j(t, w), where w = 1, 2, · · · , S. Therefore, C*_ij represents the actual temporal correlation of the cross-correlation between pairwise sensor measurements in a group.

Consider each pair of v_i and v_j in the group q ⊆ V. Based on C′_ij and C*_ij, we have

ρ_ij = cov(C′_ij, C*_ij) / [δ(C′_ij) δ(C*_ij)]

where δ(C′_ij) and δ(C*_ij) are the standard deviations of C′_ij and C*_ij, respectively. Then, we have

e_ij = 1 if ρ_ij > θ, and e_ij = 0 otherwise

where e_ij is the consistency criterion, and e_ij = 1 (resp. 0) denotes that v_i and v_j are consistent (resp. inconsistent).

Remark 2. The choice of the threshold θ depends on the experience of C_ij in Phase I; the performance of the model on the test set can be observed by trying different thresholds and selecting the one with the best performance.

Definition 2. Let N_i^a = {v_j | e_ij = 1 or e_ij = 0, i ≠ j} be the set of all neighbors of v_i, and N_i^c = {v_j | e_ij = 1, i ≠ j} be the set of consistent neighbors of v_i obtained according to the comparison of the ETSs and ATSs. Let V_q^a = {v_i | |N_i^c| / |N_i^a| < 50%} be the set of abnormal nodes in group q. The diagram of correlation testing within a group is shown in Figure 4.
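The consistency criterion and the majority vote of Definition 2 can be sketched as follows. This is an interpretation under stated assumptions: ρ_ij is taken as the Pearson correlation between the predicted series C′_ij and the actual series C*_ij, and a sensor is voted abnormal when fewer than half of its neighbors in the group are consistent with it.

```python
import numpy as np

def consistency(c_pred, c_actual, theta=0.0):
    """e_ij: 1 if the predicted and actual cross-correlation series agree,
    i.e., their Pearson correlation rho_ij exceeds the threshold theta."""
    rho = np.corrcoef(c_pred, c_actual)[0, 1]
    return 1 if rho > theta else 0

def vote_abnormal(e):
    """Majority voting per Definition 2: node i is abnormal when fewer than
    50% of its neighbors are consistent with it; e is a symmetric 0/1 matrix
    of pairwise consistency results e_ij."""
    n = e.shape[0]
    return [i for i in range(n)
            if sum(e[i, j] for j in range(n) if j != i) / (n - 1) < 0.5]
```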
Effectiveness of the Proposed Framework

This section is devoted to investigating the effectiveness of the framework through simulation experiments.

Experiment Preparation

We applied the detection framework to an hourly electricity demand dataset organized by subregion, which was based on the 2020 US Energy Information Administration State Electricity Profiles (available at http://www.eia.gov/, accessed on 2 June 2023). This dataset was chosen because it was derived from widely distributed smart meters, making it suitable for evaluating the effectiveness of our proposed framework in detecting FDIAs.

By visualizing the dataset, we found that the time-series data show a pronounced periodicity with a period length of 24 h. Therefore, the model parameters used for this dataset were obtained through observation and manual grid search, as shown in Table 2.

Table 2. Model parameters used on the dataset.
Parameter: Value
The HTSs' size T: 4246 h
The ETSs' size S: 120 h
Sliding window size k: 720 h
Threshold θ: 0

Experiments and Analysis of Experimental Results

Figure 5 illustrates the results for one of the groups, consisting of a set of sensors {v_1, v_2, · · · , v_7}, after correlation grouping and data fitting for the HTSs. In Figure 5, we visualize only the data points with a step size of 24 to display the fitting results clearly. As can be seen from the figure, there is a strong correlation between the HTSs within a group, and our approach effectively fits the HTSs.

Figure 6 illustrates the comparison results of the ETSs and ATSs for the group after correlation prediction. It is seen that our approach can effectively predict future data. To further validate the effectiveness of our framework for detecting FDIAs, we performed correlation testing for various FDIA strategies on target signals. Moreover, we compared our approach with the SCCR solution given in previous work [18], where the SCCR is a consistent ellipse area formed by spatiotemporal correlations. In our experiments, the confidence degree of the consistency ellipse was set to 95%.

In addition, we used three different metrics: the successful detection rate, false-negative detection rate, and false-positive detection rate. The successful detection rate is the proportion of actual abnormal nodes that are correctly identified; the false-negative detection rate is the proportion of actual abnormal nodes that are incorrectly identified as normal; and the false-positive detection rate is the proportion of actual normal nodes that are incorrectly identified as abnormal. These metrics can be computed as sketched below.
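The following is an illustrative helper for the three metrics just defined (not code from the paper); it operates on sets of node indices.

```python
def detection_rates(true_abnormal, flagged, all_nodes):
    """Successful, false-negative, and false-positive detection rates,
    as defined above, from sets of node indices."""
    true_abnormal, flagged = set(true_abnormal), set(flagged)
    normal = set(all_nodes) - true_abnormal
    sdr = len(true_abnormal & flagged) / len(true_abnormal) if true_abnormal else 0.0
    fnr = len(true_abnormal - flagged) / len(true_abnormal) if true_abnormal else 0.0
    fpr = len(normal & flagged) / len(normal) if normal else 0.0
    return sdr, fnr, fpr
```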
The Simple FDIA

A simple FDIA randomly generates an attack signal. Assuming that r_3(t) is chosen as the attack target in the group, the power demand of v_3 is randomly increased by 50%, as shown in Figure 7. In our solution, Figure 8 and the second line of Table 3 show the results of comparing the normal temporal correlation of the cross-correlation with the actual temporal correlation of the cross-correlation of v_3 within the group under a simple FDIA. As shown in Figure 8, from the start of the FDIA, the change in the trend of C*_ij relative to the trend of C′_ij is clearly inconsistent. In the SCCR solution, we can also observe the inconsistency of v_3 with the other nodes. Both the proposed framework and the SCCR solution are able to accurately detect the simple FDIA on v_3. We conducted a total of 100 similar experiments in all groups, in which the framework proposed in this paper and the SCCR solution were able to detect at least 99% of FDIAs (Figure 9a), and the false-negative detection rate (Figure 9b) and false-positive detection rate (Figure 9c) were almost zero. Therefore, we conclude that, in general, the framework proposed in this paper performs well in detecting simple FDIAs.

The Stealthy FDIA

In a stealthy FDIA, the attacker injects false data in a well-designed way that is generally not easily observable. Assuming that r_3(t) is chosen as the attack target and that the attacker is able to learn the time series of v_3 of the group, the power demand of v_3 slowly increases within the detection threshold (a boiling-frog attack [31]) while also exhibiting periodicity from t = 4247 h, as shown in Figure 10. In our solution, Figure 11 and the third line of Table 3 show the results of comparing the normal temporal correlation of the cross-correlation with the actual temporal correlation of the cross-correlation of v_3 within the group under a stealthy FDIA. As shown in Figure 11, from the start of the FDIA, the change in the trend of C*_ij relative to the trend of C′_ij becomes gradually inconsistent. However, in the SCCR solution, we do not identify any outliers during the first 66 h of the FDIA, after which the abnormal nodes v_3, v_6, and v_7 are identified. This result is caused by the fact that, at the beginning of the FDIA, the outliers are within the detection threshold of the SCCR solution, leading to unrecognized anomalies; these are then considered normal when building the consistency ellipse, resulting in a high rate of false positives.
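For reproducibility, the two attack types described above can be emulated roughly as follows. This is a hedged sketch: the paper specifies a random 50% demand increase for the simple FDIA and a slow within-threshold drift (boiling frog) for the stealthy FDIA, but the exact generators are not given, so the scale and drift rate below are assumptions.

```python
import numpy as np

def simple_fdia(r, start, scale=0.5, rng=None):
    """Simple FDIA: from index `start`, randomly inflate readings by up to
    `scale` (the experiment raises demand by 50%)."""
    rng = rng or np.random.default_rng()
    out = r.copy()
    out[start:] *= 1.0 + scale * rng.random(len(r) - start)
    return out

def stealthy_fdia(r, start, drift_per_step=0.001):
    """Stealthy (boiling-frog) FDIA: a slow multiplicative drift that stays
    under the detection threshold while preserving the signal's periodicity."""
    out = r.copy()
    steps = np.arange(len(r) - start)
    out[start:] *= 1.0 + drift_per_step * steps
    return out
```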
We conducted a total of 100 similar experiments in all groups, in which both the framework proposed in this paper and the SCCR solution were able to detect at least 99% of FDIAs (Figure 9a), and the false-negative detection rate was almost zero (Figure 9b); the false-positive detection rate for the framework proposed in this paper was almost zero, while the false-positive detection rate for the SCCR solution was up to 14% (Figure 9c). In addition, we observed that long-term attack signals resulted in stronger inconsistencies than short-term attack signals. Therefore, we conclude that, in general, the framework proposed in this paper performs well in detecting stealthy FDIAs on a single sensor, and our approach is superior to the SCCR solution in detecting long-term and stealthy attack signals. However, the inconsistency was not obvious from the beginning of the FDIA. Therefore, it is necessary to choose a suitable ETS size or sliding-window size when detecting stealthy FDIAs.

In addition, assume there is collusion: the attacker chooses a node whose data are consistent with node v_3 as the next attack target so that the two work in concert. With v_2 chosen as the next attack target, the same FDIA strategy is used to construct an attack signal for v_2 after the attacker learns the cross-correlation between v_2 and v_3. Figure 12 shows the signals of v_2 and v_3 with and without the FDIA, where the FDIA starts at t = 4247 h. In our solution, Figure 13 and the fourth line of Table 3 show the results of comparing the normal temporal correlation of the cross-correlation with the actual temporal correlation of the cross-correlation of v_2 and v_3 within the group under stealthy and collusive FDIAs. As shown in Figure 13, from the FDIA's start, the change in the trend of C*_32 relative to the trend of C′_32 is relatively consistent, and ρ_32 = 0.98 also indicates that the readings of the collusive nodes are consistent. Owing to the proposed voting algorithm, the framework can still detect the stealthy and collusive FDIAs on v_2 and v_3. However, in the SCCR solution, we similarly do not identify any outliers during the first 66 h of the FDIA, after which the abnormal nodes v_2, v_3, v_6, and v_7 are identified.
We conducted a total of 100 similar experiments in all groups, in which both the framework proposed in this paper and the SCCR solution were able to detect at least 95% of FDIAs (Figure 9a) with no more than a 5% false-negative detection rate (Figure 9b); the false-positive detection rate for the framework proposed in this paper was, again, no more than 3%, while the false-positive detection rate for the SCCR solution was as high as 19% (Figure 9c). Furthermore, we observed that long-term attack signals result in stronger inconsistencies than short-term attack signals. Therefore, we conclude that, in general, the framework proposed in this paper performs well in detecting stealthy FDIAs on two collusive sensors, and our approach is superior to the SCCR solution in detecting long-term, stealthy, and collusive attack signals. However, as the number of collusive sensors increased, we observed a performance degradation. The proposed detection algorithm fails when the proportion of collusive sensors exceeds 50%. This is due to the fact that the detection algorithm uses majority voting, and more than 50% of the sensors must be normal to ensure the performance of the detection. Overall, the framework proposed in this paper performs best in detecting simple and stealthy FDIAs in single-sensor scenarios and is relatively effective in detecting stealthy FDIAs in multi-sensor scenarios.

Conclusions and Future Works

This paper presents a novel detection framework for FDIAs on large-scale WSNs. The framework consists of three phases. The first phase groups the sensors based on the temporal correlation of the cross-correlation between pairwise sensors. The second phase proposes a model for learning the temporal correlation of the cross-correlation. The third phase establishes consistency criteria within each group and votes out the abnormal nodes. We validated the performance of the framework by simulating simple FDIAs and stealthy FDIAs on a real dataset.

However, the detection framework also has some limitations. First, this paper only considers the scenario where FDIAs exist, and the framework is not designed to distinguish between FDIAs and natural anomalies, disruptive events, etc. Second, ARIMA is usually more suitable for forecasting problems with one-dimensional time-series data; for more complex problems, especially when multidimensional data are involved, the method needs further optimization. In addition, the voting algorithm fails to detect FDIAs on more than 50% of the sensors, so there is merit in exploring collusion-tolerant detection methods. Thus, there is value in further research on anomaly-score aggregation that tolerates collusion, and future work on the detection framework can be optimized by exploring other techniques to distinguish between FDIAs and natural anomalies. In addition, using a distributed detection framework that takes into account the trade-off between cost and criticality, the work can be cast as an optimization problem, such as the allocation of defense resources [32,33]. Finally, the framework proposed in this paper can be generalized to other correlation-based problems, such as advanced persistent threat detection [34,35], DDoS detection [36,37], and event-triggered state estimation [38,39].

Figure 1. The flow diagram for FDIA detection in large-scale WSNs.

Figure 2. The correlation grouping of four sensor nodes.

Figure 3. The diagram of correlation prediction within a group.
Figure 4. The diagram of correlation testing within a group.

Figure 5. The results for one of the groups after correlation grouping and data fitting for HTSs.

Figure 6. The comparison results of ETSs and ATSs for the group after correlation prediction.

Figure 8. The results of comparing the normal temporal correlation of cross-correlation with the actual temporal correlation of cross-correlation of v_3 within the group in a simple FDIA.

Figure 9. The comparison results of three metrics: (a) successful detection rate; (b) false-negative detection rate; (c) false-positive detection rate.

Figure 11. The results of comparing the normal temporal correlation of cross-correlation with the actual temporal correlation of cross-correlation of v_3 within the group in a stealthy FDIA.

Figure 12. The stealthy and collusive FDIA on v_2 and v_3.

Figure 13. The results of comparing the normal temporal correlation of cross-correlation with the actual temporal correlation of cross-correlation of v_2 and v_3 in stealthy and collusive FDIAs.

Table 3. The comparison results of the temporal correlation of cross-correlation.
Effects of Scallop Mantle Toxin on Intestinal Microflora and Intestinal Barrier Function in Mice

Previous studies have shown that feeding mice with food containing mantle tissue from Japanese scallops results in aggravated liver and kidney damage, ultimately resulting in mortality within weeks. The aim of this study is to evaluate the toxicity of scallop mantle from China's coastal areas and explore the impact of scallop mantle toxin (SMT) on intestinal barrier integrity and gut microbiota in mice. Illumina MiSeq sequencing of the V3-V4 hypervariable regions of 16S ribosomal RNA was employed to study the alterations in the gut microbiota in the feces of SMT mice. The results showed that intestinal flora abundance and diversity in the SMT group were decreased. Compared with the control group, significant increases were observed in serum indexes related to liver, intestinal, inflammatory, and kidney functions among SMT-exposed mice. Accompanied by varying degrees of tissue damage within these organs, the beneficial bacteria Muribaculaceae and Marinifilaceae were significantly reduced, while the harmful bacteria Enterobacteriaceae and Helicobacter were significantly increased. Taken together, this article elucidates the inflammation and glucose metabolism disorder caused by scallop mantle toxin in mice from the angle of gut microbiota and metabolism. SMT can destroy the equilibrium of the intestinal flora and damage the intestinal mucosal barrier, which leads to glucose metabolism disorder and intestinal dysfunction and may ultimately bring about systemic toxicity.

Introduction

In 2022, China produced more than 20 million tons of mariculture aquatic products, including nearly 16 million tons of shellfish and nearly 1.8 million tons of scallops. There is a wide variety of scallops, with about 45 species distributed along China's coasts, among which the most common and economically important scallops are Chlamys farreri, Bay scallops, and Patinopecten yessoensis. Scallops have long been one of the important species of mariculture in China. The farmed output of marine shellfish accounts for 90.79% of the world's marine shellfish output. The top 10 producers are China, the United States, Japan, South Korea, Chile, Spain, Thailand, France, Canada, and Italy, accounting for 94.17% of the world's marine shellfish output [1]. China's scallop output mainly comes from breeding, with a small proportion from fishing. In 2021, China imported USD 253 million worth of scallops and exported USD 305 million worth.

Shellfish processing can be divided into purification, pretreatment, deep processing, and waste treatment according to the technological process. The finished products mainly include fresh products, frozen products, dry products, cured and smoked products, canned products, additives, condiments, small-package leisure products, and medical products [2]. Scallops are an important seafood in Hokkaido, Japan. The scallop mantle in Japan is commonly consumed in various forms, such as raw, smoked, dried, barbecued, frozen, and boiled. Recent studies have revealed that the scallop mantle possesses a rich content of aminopolysaccharides, unsaturated fatty acids, active peptides, taurine, and other bioactive substances. These components exhibit diverse physiological functions, including anti-aging properties, anti-tumor effects, antioxidant activity, blood lipid reduction abilities, and immune regulatory capabilities [3]. In order to improve the added value of shellfish products, Qin Yu et al.
[4] studied the process of preparing seafood soy sauce by fermentation in 2021, using scallop mantle as the main raw material. In 2013, Wanghui Qing et al. [5] developed a scallop mantle sausage with a unique seafood flavor using scallop mantle and chicken as raw materials. Investigation shows that there are many processed scallop mantle foods on the Chinese market, mainly sold in the form of dry products, smoked products, ready-to-eat snacks, canned products, hoisin sauce, etc. In certain coastal regions, the consumption of raw or cooked dishes prepared with scallop mantle as a primary or ancillary ingredient is prevalent among the local population [6].

In 2018, Yasushi Hasegawa [7] of Japan first identified a new kind of shellfish toxin in Patinopecten yessoensis scallops, distinct from DSP and PSP toxins: after being fed scallop mantle, rats lost their appetite and died a few weeks later. So far, Hasegawa et al. have isolated and identified this new shellfish toxin and characterized it as a proteotoxin capable of inducing liver and kidney damage in rats. It has been observed that this toxin exhibits remarkable stability within mantle tissues, particularly when exposed to acidic conditions and digestive enzymes, rendering it resistant to decomposition even at high temperatures. They conducted acute toxicity tests on the mantle tissue: rats fed with 20% mantle tissue did not show significantly increased toxicity compared to rats fed with 1% mantle tissue, indicating that the mantle tissue was not acutely toxic. They also studied the toxicity of mantle tissue to the small intestinal wall: long-term feeding on mantle tissue altered the color of the small intestine in rats. Real-time polymerase chain reaction analysis showed that the uptake of mantle tissue caused changes in the small intestine's inflammation and endoplasmic reticulum stress markers. These outcomes indicate that scallop mantle feeding leads to toxicity after initial damage to the small intestinal tissue [8].

The gut microbiota is a complex microbial community that inhabits the digestive tract of animals and plays a significant role in host metabolism [9], nutrient absorption and production, and the immune system, making a significant contribution to the wellness of the host. The small intestine serves as the primary barrier preventing pathogens and toxins from entering the body [10], maintaining the steady state of the intestinal flora and safeguarding organ integrity. Damage to or defects in intestinal barrier integrity may lead to bacterial imbalance and allow hazardous substances to cross the epithelial barrier, leading to intestinal inflammation. The intestinal microbiota plays a significant role in mediating interactions between host metabolism and environmental materials.
After entering the body, ingested toxins first interact with the gastrointestinal tract. The synergistic effect between the gut microbiota and the gastrointestinal tract can protect the host from such toxins. However, the specific changes in the individual intestinal microbiota disturbed by SMT remain unclear. Patinopecten yessoensis is one of the main aquatic products in the northeast region of China. Besides the adductor muscle, the scallop mantle tissue is also consumed in China. To safeguard human health, the new shellfish toxin must be removed from the scallop mantle so that scallop products can be eaten safely. This article aims to investigate the presence of a new type of shellfish toxin in the scallop mantle from Dalian, Liaoning Province, China, and the effects of this toxin on the liver, kidney, and intestine of mice, and to study the effects of this new shellfish toxin on the intestinal flora and intestinal injury. This paper elucidates the toxic mechanism of the scallop toxin from the perspective of the intestinal flora, which provides a new idea for scientific, reasonable, and rapid detection technology and detoxification, and also provides important theoretical and practical support for the safety of scallop mantle processing and consumption.

Changes in Body Weight and Food Intake

To examine the subacute oral toxicity of the mantle from China, SPF mice were fed a diet including 2% mantle. There were no discernible differences in overall appearance or behavior between the SMT and CON groups of mice, and no clinical symptoms were discovered within 2 weeks after commencing the mantle diet. Nevertheless, after 3 weeks, the food intake of the mice in the mantle diet group began to decrease significantly, and their body weight decreased accordingly. After 4 weeks, the mice on the mantle diet exhibited a reduction in food intake to approximately 40% of their initial consumption (Figure 1b). Furthermore, their body weight declined to around 60% of the average weight observed in the control group (Figure 1a). Prolonged consumption of the mantle diet for an additional 4-5 weeks resulted in symptoms such as dyspnea, abnormal movement patterns, a tendency toward lying behavior, tremors, decreased responsiveness, lethargy, weakness, and wasting. The mortality rate was notably high among mice at this stage of the experiment. Mice died at 5 weeks after administration, and the organs were measured as a percentage of body weight (Table 1). There was a significant increase in kidney weight in the mantle diet group, while there was no significant difference in liver, stomach, or intestinal weight between the control and mantle diet groups. This suggests that scallop mantles in the coastal regions of China also contain the toxin. To further verify whether the toxin contained was the same as that in Japan, further liver and kidney toxicity tests were performed.
Effects of Scallop Mantle Toxin on Liver Damage

Liver serum markers and histopathology were analyzed to test for mantle toxicity. We measured the liver serum markers AST, ALT, TG, TC, γ-GTP, and TBA in mice. Mice fed the mantle diet had significantly higher levels of ALT, AST, TC, γ-GTP, TG, and TBA (p < 0.05). The liver is the only organ in the human body without pain-sensing nerves [11], and it is easily destroyed by toxic chemicals, leading to metabolic dysfunction [12]. Under toxic stimulation (e.g., alcohol metabolism), oxidative stress occurs in the liver, causing damage to liver cells. ALT and AST (Figure 2c,d) are the most direct and sensitive indicators of damage to liver cells. Dong et al. [13] found that ALT and AST are abundant in healthy liver tissue but low in blood. Once liver cells are damaged, large amounts of ALT and AST are released into the serum, raising their serum levels. Within a certain range, ALT and AST levels reflect the degree of liver cell damage [14]. Reddy et al.
[15] pointed out that serum TG and TC levels (Figure 2e,f) increase when liver lipid metabolism is impaired. When hepatocellular disease or intra- and extra-hepatic obstruction occurs, the metabolism of bile acid is obstructed, and the serum total bile acid concentration increases. Therefore, changes in TBA levels (Figure 2h) can sensitively reflect liver function. Elevated serum γ-GTP levels (Figure 2g) suggest liver injury. In the control group (Figure 2a), the structure of the hepatic lobules and nuclei was clear, the hepatic lobules were normal, there was no obvious congestion in the hepatic sinuses, and no inflammatory cell infiltration was visible in the liver cells. The liver tissue of mice in the scallop mantle group (Figure 2b) showed obvious cell shrinkage and unclear hepatic lobule boundaries. The nuclear structure was seriously broken, and evident inflammatory cell infiltration was observed in the liver cells.

Therefore, we observed that the consumption of scallop mantle can induce liver toxicity and damage in mice. The patterns of liver tissue injury align with those of Yasushi Hasegawa et al. [7].

Effects of Scallop Mantle Toxin on Nephridial Tissue Damage

To examine the toxicity of the mantle, we analyzed kidney serum markers and histopathology. Serum CRE and BUN are important indexes for evaluating kidney function; BUN is a major product of metabolism in the body. Its level is elevated when renal insufficiency occurs, and it can be used as a marker for the early identification of kidney injury. CRE accurately reflects glomerular filtration function, and the level of CRE increases during renal injury. As can be seen from Figure 3c,d, the serum CRE and BUN contents in the scallop mantle group were significantly increased compared with the CON group (p < 0.05). The results indicated that the mice of the scallop mantle group had kidney damage. In the control group (Figure 3a), the structure of the glomeruli and renal tubules was clear and complete, with no abnormal morphological structure. Compared with the CON group, the kidney tissues of mice in the scallop mantle group (Figure 3b) exhibited obvious pathological changes, mainly manifested as glomerular contraction and malformation, tubular dilatation and necrosis, severe expansion of the proximal and distal convoluted tubules, and inflammatory cell infiltration in the renal interstitium. Therefore, we find that a diet containing scallop mantle produces toxic effects on the kidneys of mice, causing injury. The results of renal tissue injury were similar to those of Yasushi Hasegawa et al. [7].
Effects of Scallop Mantle Toxin on Intestinal Tissue Damage

In the control group (Figure 4a), the jejunum mucosa was intact, the villi were arranged neatly, and the intestinal epithelial cells were of clear shape, densely arranged, and of uniform size. No obvious pathological changes were observed. In comparison with the CON group, damage to the intestinal barrier and to the structural integrity of the jejunum was obvious in the SMT group. There was significant inflammatory cytokine infiltration, and the villi height was also reduced (Figure 4c). As shown in Figure 4d–f, the serum concentrations of LPS, IL-6, and TNF-α of mice in the SMT group were significantly increased compared with those in the CON group. TNF-α, IL-6, and LPS are measures of pro-inflammatory cytokines, and enhanced serum concentrations indicate an inflammatory response in the body. An intact intestinal structure is essential for maintaining intestinal barrier function. After eating the diet consisting of scallop mantle, the intestinal structure of the mice experienced severe damage, which led to the breakdown of intestinal barrier function and systemic inflammation.

Effect of Scallop Mantle Toxin on the Alpha and Beta Diversity

Through high-throughput sequencing analysis of 16S rDNA genes isolated from fecal bacteria, the effect of scallop mantle toxin on the mouse intestinal flora was studied [16]. Alpha diversity is utilized for assessing the microbial community diversity within a given sample. By calculating the index values of each sample, the diversity indexes provide insights into the richness, diversity, and uniformity of microbial communities in the sample; a sketch of two commonly used indexes is given below.
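For reference, the Shannon and Chao1 indexes reported in Figure 5 can be computed from a per-sample OTU count vector as in the minimal sketch below. These are the standard formulas, not the authors' pipeline (the study processed Illumina MiSeq data with dedicated 16S analysis tools).

```python
import numpy as np

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over observed OTUs."""
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()
    return -np.sum(p * np.log(p))

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), where F1 and F2 are the
    numbers of singleton and doubleton OTUs; the bias-corrected form
    S_obs + F1(F1 - 1) / (2(F2 + 1)) is used when F2 = 0."""
    c = np.asarray(counts)
    s_obs = np.sum(c > 0)
    f1, f2 = np.sum(c == 1), np.sum(c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0
```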
As shown in Figure 5a,b, the SMT group had no significant impact on the Shannon and Chao1 indexes of fecal intestinal flora α-diversity compared with the control group (p > 0.05). Additionally, the CON and SMT groups had 1031 and 951 operational taxonomic units (OTUs), respectively, with a total of 899 OTUs shared between the two groups (Figure 5c). These results indicated that the OTU count of the SMT group was lower than that of the CON group, suggesting that the bacterial community richness decreased due to SMT. In addition, in order to understand the effect of SMT on the intestinal microflora profile of mice, principal component analysis (PCA) of the Bray-Curtis distance was performed based on the OTUs. The SMT group showed a significant deviation from the control group, revealing a distinct pattern (Figure 5d). This result further showed that the composition of the intestinal flora in SMT-intervention mice underwent significant changes.

Effect of Scallop Mantle Toxin on Intestinal Microbial Composition

The dominant species influence the ecological and functional structure of the microbial community to a great extent. Comprehending the species composition of communities at various levels may effectively explain the formation, change, and ecological effect of community structure [17]. We selected the top 20 species based on their average abundance ranking from the samples for demonstration.
At the phylum level (Figure 6a), the control group was dominated by Bacteroidota (58.19%), Firmicutes (28.92%), and Verrucomicrobiota (7.77%), while the scallop mantle group was dominated by Bacteroidota (45.34%), Firmicutes (27.05%), and Proteobacteria (12.01%). The abundance of Proteobacteria in the SMT group increased compared with that of the CON group. The gut microbiome interacts with the host to maintain homeostasis, and imbalances can lead to disease. Firmicutes and Bacteroidetes are the most diverse phyla affecting host physiology in humans and mice, and an imbalanced Firmicutes/Bacteroidetes (F/B) ratio is related to different disease processes [18]. Compared with the control group, the F/B ratio of the scallop mantle group increased, indicating an imbalance in the proportions of the intestinal flora in the SMT group. We speculate that the intake of scallop mantle promoted the increase in the F/B ratio, indicating that scallop mantle destroyed the structure of the intestinal flora and exacerbated metabolic disorders. Proteobacteria are considered indicators of ecological imbalance and disease risk. Following the consumption of scallop mantle, the increased abundance of Proteobacteria and the faster growth of intestinal pathogens may be associated with the ingestion of bacteriocins produced by the scallop mantle.

At the family level (Figure 6b), compared with the control group, the abundance of the beneficial bacteria Muribaculaceae significantly decreased, while the abundance of the harmful bacteria Enterobacteriaceae significantly increased. The richness of Marinifilaceae decreased from 6.64% to 3.15%. Marinifilaceae is a family of beneficial bacteria associated with the treatment and improvement of lipid metabolism disorders [19]. Enterobacteriaceae is a Gram-negative family that can bring about various diseases in humans and animals, for example, osteomyelitis, urinary tract infections, and bacteremia.

At the genus level (Figure 6c), the control group was dominated by Muribaculaceae (23.02%), Bacteroides (11.80%), Akkermansia (7.77%), and Odoribacter (6.42%), while the SMT group was dominated by Bacteroides (14.86%), Muribaculaceae (11.14%), Akkermansia (10.70%), and Escherichia-Shigella (10.30%). Compared with the control group, the abundance of the beneficial bacteria Muribaculaceae decreased significantly. Liang H et al. [20] conducted an experiment in which metformin-regulated intestinal microbiota reduced liver damage associated with sepsis. After the application of metformin in rat liver injury, it was found that the proportion of Muribaculaceae increased, suggesting that Muribaculaceae may be related to the treatment and improvement of sepsis-related liver injury. Changes in Muribaculaceae richness can result in a loss of intestinal barrier integrity and, therefore, increase the risk of cancer. Regulating the abundance of Muribaculaceae could provide a foundation for ameliorating inflammation and metabolic diseases such as obesity. Therefore, we assume that the decrease in the abundance of Muribaculaceae caused by the scallop mantle led to the loss of intestinal barrier integrity, resulting in intestinal barrier damage.
In comparison with the CON group, the abundance of the harmful bacteria Escherichia-Shigella increased significantly. Its increase disrupts the homeostasis of the intestinal flora, resulting in diarrhea and gastrointestinal diseases. The richness of Blautia decreased from 2.07% to 0.62% in the SMT group compared with the control group. Blautia is a genus of anaerobic bacteria with probiotic characteristics, widely present in the feces and intestines of mammals; it possesses the capacity to regulate host health, alleviate metabolic syndrome, and mitigate metabolic and inflammatory diseases. A reduction in its abundance may result in health complications such as inflammation and disrupted metabolism within the host.

Compared with the control group, the abundance of the harmful Helicobacter increased from 0.56% to 1.96% in the SMT group. Helicobacter causes varying degrees of acute pathology, ranging from gastroenteritis to chronic pathology, including inflammatory bowel disease and liver and gallbladder disease [21]. Owing to colitis or caecocolitis, rectal prolapse is the primary clinical symptom of Helicobacter infection. Animals infected with Helicobacter may also develop diarrhea [22].
Park J. M. et al. [23] found that EGCG (epigallocatechin-3-gallate) not only improved various parameters in diabetic mice but also significantly increased the Firmicutes/Bacteroidetes ratio at the phylum level. At the family level, EGCG increased the proportion of Christensenellaceae and decreased the proportions of Enterobacteriaceae and Proteobacteria. After Huanglian Jiedu Decoction was administered to fat Zucker rats with diabetes, the Firmicutes/Bacteroidetes ratio was reduced, and the proportions of Parabacteroides, Blautia, Akkermansia, and other SCFA-producing and anti-inflammatory bacteria changed [24]. In a randomized clinical trial, a specially designed Chinese herbal formula composed of eight kinds of herbs significantly changed the overall structure of the intestinal flora and alleviated the symptoms of type 2 diabetes (T2DM) by enriching Faecalibacterium, Blautia, and other beneficial bacteria [25,26]. Yasushi Hasegawa et al. found that blood glucose was significantly increased in the mantle group compared to the control group, which implied that the mice had elevated fasting blood glucose levels; this could be due to some other form of acute disease or inflammation in the body. This could also severely impair glucose tolerance in the mice, leading to the development of glucose metabolism disorders akin to diabetes. The intestinal flora imbalance caused by SMT is contrary to the intestinal flora improvement reported in diabetic mice in the previous studies. Therefore, we hypothesize that SMT disrupts the balance of the intestinal flora and damages its structure and composition, which we surmise is one of the causes of the glucose metabolism disorder in mice.

The Tukey test (Figure 6d) revealed significant differences at the genus level between the control group and the scallop mantle group. Using LEfSe analysis, it was also found that there were significant differences in microbial communities among the groups (LDA score > 2) (Figure 6e,f). From the phylum to the genus level, it should be pointed out that there were 22 differential bacteria in the SMT group, compared with only 9 in the CON group. The bacterial groups differed between the SMT group and the control group: the phylum Proteobacteria, the family Enterobacteriaceae, and the phylum Campilobacterota, class Campylobacteria, genus Helicobacter were significantly enriched in the scallop mantle group. The variation in taxonomic abundance was consistent with the results of the composition analysis. LEfSe analysis was used to screen for significantly different bacteria between groups. The results indicated that the differential bacteria in the control group were mainly non-pathogenic, while those in the SMT group were mainly virulent, for example, Escherichia-Shigella and Helicobacter.

Analysis and Prediction of Intestinal Flora Metabolism

The sample data from the control group and the scallop mantle group were selected, examined, and predicted on the KEGG platform. The predicted functions of the altered microbial communities are mainly shown in Figure 7; the predicted metabolites were all elevated in the scallop mantle group (p < 0.05), including d-Alanine, Vitamin B1, Vitamin B2, Sulfurous acid, L-Asparaginase, Glyoxylic acid, Ubiquinone, Citrate, Aminobenzoate, and Steroid hormone.
Glyoxylate and dicarboxylate metabolism encompasses a series of reactions involving the utilization of glyoxylic or dicarboxylic acids. Proffitt Ceri et al. [27] discovered an augmented abundance of metabolic reactions related to glyoxylate and dicarboxylate metabolism, particularly concerning tartaric acid, in individuals with obesity, type 2 diabetes, and atherosclerosis. Tartaric acid serves as a metabolite within the glyoxylate and dicarboxylate metabolic pathway. It is commonly present in foods such as grapes, and upon ingestion, it can enter metabolism through either the tricarboxylic acid cycle or conversion into glycerol. Elevated plasma levels of glycerol have been positively linked to type 2 diabetes. Only 20% of ingested tartaric acid from food is excreted in urine, indicating that the remaining portion may be consumed by intestinal microbes. While human tissues are capable of metabolizing tartaric acid, it is predominantly metabolized by intestinal bacteria, highlighting the intimate association between tartaric acid and the intestinal microbiome. The glyoxylate cycle has the capability to convert fatty acids into glucose, which can result in insulin resistance. In metabolic disorders, genome-scale metabolic models revealed heightened acetate production, which could be associated with glyoxylate and dicarboxylate metabolism, since these pathways decompose acetate and amino acids in microorganisms for energy generation. There was also an enhancement in arginine and proline metabolism, which was further correlated with elevated consumption of glutamic acid, a precursor required for the synthesis of arginine and proline. Tartrate undergoes fermentation in the colon. Elevated levels of acetic acid can enhance pancreatic β-cell activity and stimulate insulin secretion, influencing eating habits and obesity and ultimately impacting host metabolism. These results suggest that SMT could disrupt the balance of the intestinal microflora and thereby cause a disturbance of glucose metabolism. The metabolic pathways of the intestinal flora help elucidate the disturbance of glucose metabolism in mice caused by the scallop mantle toxin reported by Yasushi Hasegawa et al.
Conclusions This paper investigates the presence of a new type of shellfish toxin in the scallop mantle in Dalian, Liaoning Province, China, and the effects of this toxin on the liver and kidney. Additionally, it examines the impact of this new shellfish toxin on the intestinal flora and intestinal injury. Compared with the CON group, the SMT group mice exhibited decreased body weight and food intake; the serum indexes of liver and kidney function, including AST, ALT, γ-GT, TG, TC, TBA, BUN, and CRE, were increased; the histopathology of the liver, kidney, and intestinal tract was seriously damaged; the intestinal villi length was reduced; and the concentrations of inflammatory serum indexes such as IL-6, TNF-α, and LPS were increased, indicating that an inflammatory response occurred in the mice. In the analysis of the intestinal flora of the mice, we found that beneficial bacteria, such as Muribaculaceae and Blautia, decreased, while harmful bacteria, such as Escherichia-Shigella and Helicobacter, increased. In this paper, the toxic effects of SMT were thus explained from the perspective of the intestinal flora: SMT disturbed the composition of the intestinal flora, and the resulting decrease in beneficial bacteria and increase in harmful bacteria caused intestinal barrier damage and inflammation, ultimately producing toxicity. We are therefore inclined to believe that there may be a new type of shellfish toxin in the mantle of the Dalian scallop, and its toxicological mechanism may be similar to that reported for scallop mantle in Japan. Continuous consumption of scallop mantle caused varying degrees of damage to the livers, kidneys, and intestines of the mice, as well as destroying the balance of the intestinal flora, eventually resulting in disease. By integrating the findings of this study with the research progress of Hasegawa's experimental team, it is found that both mice and rats suffer toxicological damage after consuming feed supplemented with scallop mantle; it is concluded that the scallop mantle contains toxins, which damage the body, and can even lead to death, by increasing harmful bacteria and reducing beneficial bacteria in the intestinal flora. Our investigation into this new shellfish toxin is still at a preliminary stage and faces great limitations. For example, we lack knowledge regarding the specific components of this toxin, which hinders our ability to develop corresponding antibodies to treat the damage it causes. Our future research direction is to establish a separation and purification method for the target protein toxin, determine its physicochemical properties and molecular structure, and develop rapid and reliable detection methods so as to elucidate the mechanism of protein toxin formation in the scallop skirt. We also aim to identify the key factors and techniques for weakening or detoxifying the protein toxin through an investigation into the regulation of its formation, so as to solve the safety problem of scallop skirts and promote their high-value utilization. We seek to standardize the production of processed scallop skirt food to address food safety problems.
Our objective is to study the growth conditions of Patinopecten yessoensis and the pattern of formation of the new shellfish toxin, to determine whether the new shellfish toxin is formed naturally or accumulated by filtering toxic algae from the ocean, and to clarify the difference and relationship between the formation process of the new shellfish toxin and known shellfish toxins, as well as the relationship between its formation and the sea area and shellfish species. We also seek to prepare polyclonal antibodies and perform gene cloning; to explore rapid and economical detection techniques; to investigate how environmental factors, culture methods, sea area, and shellfish species regulate the formation of the new shellfish toxin; and to explore the key factors and techniques for reducing or removing the toxin. In order to further clarify the mechanism of action of the toxin and its impact on human health, we may propose reducing the consumption of scallop skirt to avoid the harm caused by the new shellfish toxin to humans, so as not only to strengthen the supervision of food safety but also to avoid foodborne diseases. Collection and Preparation of Scallop Mantle Toxin Patinopecten yessoensis scallop samples were purchased from the aquatic products market in Jinzhou, Liaoning Province, China. The seawater was drained, and the scallop mantle was taken out, washed, frozen, freeze-dried, and then ground into powder so that it was ready to be added to the mouse diet. Animal Treatment Eight-week-old male ICR mice (25-30 g) were purchased from Jinzhou Medical University, Liaoning Province, China. The mice were kept at a temperature of 23 ± 2 °C and a humidity of 50 ± 10%, under a circadian cycle of 12 h of light and 12 h of darkness [28]. After a week of adaptation, the experiment began. All animal experiments followed the guidelines for the care and use of experimental animals of Jinzhou Medical University. All experimental procedures in this study were in accordance with the animal care guidelines of the Chinese Academy of Health and were approved by the Animal Care and Use Committee of the Animal Center, Jinzhou Medical University, Liaoning Province, China [29]; approval date: 25 October 2023; use license number: SYXK2022-0006; production license number: SCXK2019-0003. The mice were randomly divided into two groups (10 mice per group): the control group and the scallop mantle group. The control group ate a normal diet; the scallop mantle group ate a diet supplemented with scallop mantle. We ensured an adequate water supply during breeding, changed the bedding once a week, and weighed the mice once a week. The status of the mice was observed, and the amount of remaining feed was recorded [30]. The daily feed is shown in Table 2. After 4 weeks of feeding, the mice were anesthetized, and the inguinal and epididymal white adipose tissue, as well as the liver, kidney, colon, stomach, and cecum, were rapidly removed and weighed. We took heart blood and centrifuged it at 5000× g for 10 min. We collected two fecal pellets excreted by each mouse in an empty sterile cage and stored them at −80 °C until microbiological analysis was conducted.
Biochemical Assays of the Serum Serum concentrations of total cholesterol (TC), triglyceride (TG) [31], total bile acids (TBA), gamma-glutamyl transpeptidase (γ-GTP), aspartate aminotransferase (AST), alanine aminotransferase (ALT) [32], creatinine (CRE), and blood urea nitrogen (BUN) were measured with assay kits (Nanjing Jiancheng Institute of Biotechnology, Nanjing, China). All reagents were prepared in accordance with the steps of the kit instruction manuals and measured with a microplate reader and an ultraviolet spectrophotometer. The reagents used in this paper were purchased from the Nanjing Jiancheng Bioengineering Institute, Nanjing, China. The Mouse TNF-α ELISA kit, Mouse IL-6 ELISA kit, and Mouse LPS ELISA kit were purchased from SHANGHAI JINGKANG BIOENGINEERING Co., Ltd., Shanghai, China. The measurement data were recorded and then analyzed using software. Histology Examination After blood extraction, the mice were euthanized; the kidneys, liver, and intestines were removed in turn, weighed, recorded, and cleaned with PBS solution. Kidney, liver, and intestine tissues were fixed with 4% neutral buffered formaldehyde and embedded in paraffin according to the standard procedure. Paraffin-embedded kidney, liver, and intestine tissues were cut into slices, which were stained with hematoxylin-eosin (HE). We observed the morphology of the ileal villi under a microscope and used ImageJ (K-Viewer-1.7.0.29-x86_64) software to measure the height of the villi and the depth of the crypts. Inflammatory Cytokines Examination The contents of interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and lipopolysaccharide (LPS) in serum were determined by enzyme-linked immunosorbent assay (ELISA). The blood of the mice was collected by ocular blood sampling and centrifuged at 3000× g for 10 min at 4 °C, and ELISA was then performed on the serum according to the instructions of the ELISA kits. 16S Sequencing and Analysis According to the manufacturer's instructions, we used the GHFDE100 DNA isolation kit (GUHE Laboratories, Hangzhou, China; Zhejiang Hangzhou Equipment Preparation 20190952) to extract total bacterial genomic DNA from all samples. The quantity and quality of the extracted DNA were measured using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and agarose gel electrophoresis, respectively. PCR amplification of the V4 region of the bacterial 16S rRNA gene was performed using the forward primer 515F and the reverse primer 806R. Figure 1. Weekly changes in body weight and food intake. (a) Weekly weight change of the CON group and SMT group; (b) weekly residual feed changes of the CON group and SMT group. Figure 2. Liver serum markers and histopathology. (a) Liver histopathological picture of the CON group; (b) liver histopathological picture of the SMT group; (c-h) serum activities of AST, ALT, TG, TC, γ-GTP, and TBA for the CON and SMT groups, respectively.
Figure 3. Kidney serum markers and histopathology. (a) Kidney histopathological picture of the CON group; (b) kidney histopathological picture of the SMT group; (c,d) serum activities of CRE and BUN for the CON and SMT groups, respectively. Figure 4. Effect of SMT on colonic histopathological changes and inflammation in mice. (a) Intestinal tract histopathological picture of the CON group; (b) intestinal tract histopathological picture of the SMT group; (c) intestinal villus length of the CON group and SMT group; (d-f) serum activities of LPS, IL-6, and TNF-α for the CON and SMT groups, respectively. Figure 5. Effects of SMT on intestinal flora diversity in mice. Intestinal flora alpha diversity parameters: (a) Shannon and (b) Chao1 indexes; (c) Venn diagram based on OTU levels; (d) PCA plot of the intestinal flora and β-diversity difference diagram of the intestinal flora in the two groups. Figure 6. Intestinal flora richness of mice, Tukey test, and LEfSe determinations. (a) Species composition histogram at the phylum level; (b) species composition histogram at the family level; (c) species composition histogram at the genus level; (d) Tukey test of differential species between groups; (e,f) histograms and cladogram of enriched taxa based on LEfSe determinations, respectively. Table 2. Composition of the control diet and mantle diet.
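The alpha diversity indexes reported in Figure 5 can be computed directly from a per-sample OTU count vector. The following is a minimal sketch of that computation; the function names and the example counts are illustrative assumptions, not data or code from the original analysis pipeline.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTUs present in a sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]          # ignore absent OTUs
    p = counts / counts.sum()            # relative abundances
    return -np.sum(p * np.log(p))

def chao1_index(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)     # observed OTUs
    f1 = np.sum(counts == 1)             # singletons
    f2 = np.sum(counts == 2)             # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical OTU counts for one fecal sample
sample = [120, 43, 1, 7, 1, 2, 0, 15]
print(f"Shannon: {shannon_index(sample):.3f}, Chao1: {chao1_index(sample):.1f}")
```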
9,748.2
2024-05-27T00:00:00.000
[ "Environmental Science", "Medicine", "Biology" ]
The Idea and Key Technical Prospect on Integration between Underground Reservoir and Surface Water System The problem of global water security is extremely serious due to the increasingly scarce and unevenly distributed water resources and severe water pollution in some regions. Water security is a major long-term strategic demand of China, involving water resource supply and the prevention and control of flood, drought and water pollution. In order to solve the prominent problems of global surface water shortage and the groundwater level decline caused by the over-exploitation of traditional groundwater reservoirs, the underground reservoir is proposed under the guidance of the national strategy "Deep Into the Earth". The protection and early warning of water security would be improved through the interaction between the underground reservoir and the surface water system. This paper provides solutions for water security issues: the preliminary concept of the underground reservoir is defined, and the theoretical system of water security for underground reservoirs and surface water is put forward. Moreover, several key technologies are also proposed, including distributed rainfall-runoff and water environment simulation with high precision, site selection and design of the underground reservoir, early warning and protection of the water environment in the underground reservoir, and the joint operation of water quantity and quality between the underground reservoir and the surface water system. Introduction Water is the source of life, the basis of production and ecology, and the fundamental resource on which human beings depend for survival. The fresh water available to humans accounts for 2.5% of global water, but only 0.26% of it can be directly utilized after excluding the hard-to-exploit water in the Antarctic ice sheet, the Greenland ice sheet and some mountain glaciers. Worse, the global fresh water is unevenly distributed: 60% of it belongs to the top nine countries (Brazil, Russia, Canada, China, the United States, Indonesia, India, Colombia, and Congo), while 80 countries and regions with 1.5 billion people are short of fresh water [1]. Moreover, with the development of the social economy, the problem of water environmental pollution is increasingly aggravated. More than 420 billion m³ of sewage are discharged into rivers, lakes and seas every year, polluting 5.5 trillion m³ of fresh water, which is equivalent to 14% of global runoff [2]. This not only aggravates the shortage of available irrigation water, becoming an important constraint on food production, but also directly affects the safety of drinking water and food production, resulting in huge economic losses. According to the "United Nations World Water Development Report 2018" [3], the global demand for water resources is growing at a rate of 1% per year. About 5 billion people will live in water-shortage areas by 2050 if no action is taken. With the constant extension and expansion of its definition, "water safety" has been replaced by "water security". Generally, "safety" refers to the safety of human health and of production and technology activities, covering the natural attributes of "water safety", such as water resource safety, water environment safety, etc. [4]. However, "security" contains the social and political security issues that can guarantee stability and economic development [5], and it covers the social attributes of "water security".
Thus, "security" that has comprehensive implications has been used by a large number of scholars [6][7] [8], such as the internationally recognized "food security". In 2009, UNESCO(United Nations Educational Scientific and Cultural Organization) defined water security as the water resources that can ensure living and development of humans in quantity and quality, which can maintain the sustainable health of human and ecological environment, protect lives and properties from water disasters [8]. And the implication of "water security" has been extended to the resources, environment, ecology, society, politics and economy. Karen Bakker [9] discussed the challenges and opportunities of water security research, indicating that the problem of water security had received a great attention in the world. In recent years, surface water conservancy projects have been built all over the world to solve the water security problems including water resource shortage and uneven distribution in time and space [10], such as the California Water Diversion Project, China South-to-North Water Diversion Project [11] and China Three Gorges Project [12]. However, the problems of water security cannot be completely solved by relying only on surface water systems. As early as 1998, Qian [13] predicted the prospect of underground space development, and considered the planning of which in recent years [14] [15]. Chen [16] proposed the strategic concept of building a large-scale deep earth science and engineering laboratory in China, and showed that the need to develop underground space is becoming more and more urgent. Xie [17] also presented several ideas of deep earth science in hydraulic engineering. At the China National Science and Technology Innovation Conference in 2016, it was pointed out that "Deep Into the Earth" is a problem of science and technology strategy that must be solved in China [18], and water is the first factor for deep earth space utilization and deep resource development [19]. Therefore, the construction of the underground reservoir integrated with the surface water system would gradually form the early warning theoretical system of water security, which is a new way to solve the water security problems. Development trend of underground reservoir At the beginning of the 20th century, the practice of artificial groundwater recharge has been conducted in Japan, America, Soviet Union, Netherlands, France and other countries. In the 1950s, the freshwater storage experiments in the salt water layer were started in America [20] and the artificial recharge of groundwater was carried out by using natural aeolian dunes In Netherlands [21]. After the 1970s, ASR(Aquifer Storage and Recovery) was gradually formed in America, and until July 2002, there were 56 ASR in operation and more than 100 systems has been built [22]. Now, the ASR project has become an important part of CERP(the Comprehensive Wetland Restoration Plan). In order to solve the problems of water shortage and seawater intrusion, an artificial groundwater reservoir(the catchment area is 0.6 km 2 and the total storage capacity is 9000 m 3 ) was built in 1972 in Nagasaki, Japan [23], which is the first one in the world. And the groundwater reservoirs in Miyako and Sunakawa of Okinawa [24] were successively constructed. In 1975, the Nangong Groundwater Reservoir(the total storage capacity is 480 million m 3 )[25], the first one in China, was built in Hebei Province. 
Subsequently, the Jiahe Groundwater Reservoir and the Dagu River Groundwater Reservoir in Shandong [26], and the Maguan Groundwater Reservoir in Guizhou [27], were gradually built. As an effective, economical and environmentally friendly underground development project, the groundwater reservoir has developed rapidly, and it is known abroad as "aquifer recharge" or "aquifer storage and recovery". Most scholars believe that the groundwater reservoir is a groundwater development project that uses the natural water storage space within the earth's crust to store water resources [28]. However, the underground reservoir proposed in this paper is different from the traditional groundwater reservoir. It is preliminarily defined as a reservoir that is built artificially at a certain depth underground, with surface water dispatching or natural recharge as the main water source. A project similar to this concept is the "Outside Drainage System of the Capital Area" built between 1992 and 2006 in Tokyo [29], which is the most advanced sewer drainage system in the world. The Tokyo drainage system is located about 50 meters underground along National Highway 16 in Kasukabe City, and its total length is 6.4 kilometers. There are five giant shafts, each 65 m high and 32 m wide, and an artificial underground reservoir 177 m long, 77 m wide and about 20 m high, which is supported by 59 large columns, each 18 m high and weighing 500 tons [30]. The underground reservoir has many advantages, including no land occupation, no resettlement, no damage to the ecological environment, no risk of flooding or dam failure, reduced water loss from evaporation, and the provision of a foundation for deep water networks. In particular, it can avoid the problem of groundwater over-exploitation and effectively solve the problem of uneven spatial and temporal distribution of water resources. The underground reservoir can be used as a source of water supply to solve the problem of water shortage, improving the existing traditional mode of water resources development, flood prevention and mitigation, and water environmental protection. Through comprehensive operation with surface water, it can improve the utilization of water resources and protect the surface water environment. In the flood season, it can also be used as a flood detention pool to achieve flood control and disaster mitigation. Furthermore, the underground reservoir group, which is constituted by a number of independent or connected underground reservoirs based on regional hydrometeorology, topography and geology, water supply and water environmental protection requirements, is linked and coordinated with the surface water system to solve water security issues jointly. The prospect on theory and key technology of integration between underground reservoir and surface water system The study of the underground reservoir and its linkage with the surface water system needs interdisciplinary integration. This research has the dual characteristics of theoretical science and technical science.
The basic theories involve mathematics, physics, geography, geology, ecology, environmental science, material science, control science, management science, etc., and the basic technologies include measurement technology, control technology, computer technology, information technology, etc. The theoretical study of the integration between the underground reservoir and the surface water system mainly involves three aspects: (1) The basic theory of the underground reservoir and the underground reservoir group, such as the precise definition of the underground reservoir, its main functions, and the construction areas that need to be studied. (2) The design theory of the underground reservoir. The design concept, design principles, design standards and design methods should be studied from the perspectives of hydrometeorology, water environment and water security. (3) The scientific basis and method of the integration between the underground reservoir and the surface water system in terms of water quantity and quality; the early warning system of water security integrating underground reservoirs with the surface water system should eventually be formed. 3.1 Key technology of distributed rainfall-runoff-water environment simulation and early warning with high precision The distributed meteorological, hydrological and water environment integrated simulation model is developed to provide high-precision hydrological and water environment forecast information for the coordinated and optimized operation of underground reservoirs and their linkage with the surface water system. 3.2 Key technology of site selection, storage capacity design and early warning of underground reservoir Considering the basin composition characteristics, rainfall, water quantity, topography, water resource demand and allocation, and water environment endowment, the site selection of the underground reservoir should be carried out by using remote sensing and GIS technology, together with analysis methods including fuzzy comprehensive analysis and the analytic hierarchy process (a weight-computation sketch is given after these subsections). To establish the system and method of early warning indexes for the underground reservoir, the design of capacity, elevation and water function should be based on the simulation results of regional water resources and water environment conditions using the integrated forecasting model of meteorology, hydrology and water environment. 3.3 Key technology of early warning and protection of water environment simulation for underground reservoir Underground reservoirs are in special environmental conditions, such as no sunlight, low velocity of flow and wind, and long storage time of the water body with weak self-purification ability. Thus, environmental hydraulics, microbiology and other interdisciplinary scientific theories and technical methods need to be applied to research the water quality, its evolution rules, the early warning index system, and the water environmental protection technology of underground reservoirs.
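To make the site-selection step of Section 3.2 concrete, the following is a minimal sketch of how analytic hierarchy process (AHP) weights for candidate criteria could be computed. The criteria names and the pairwise judgments are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four site-selection criteria:
# aquifer storage space, recharge water source, geological stability, water demand.
# A[i, j] > 1 means criterion i is judged more important than criterion j.
A = np.array([
    [1.0, 2.0, 3.0, 2.0],
    [1/2, 1.0, 2.0, 1.0],
    [1/3, 1/2, 1.0, 1/2],
    [1/2, 1.0, 2.0, 1.0],
])

# The principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI; judgments are usually accepted when CR < 0.1.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
RI = 0.90  # Saaty's random index for n = 4
print("weights:", np.round(w, 3), "CR:", round(CI / RI, 3))
```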
Conclusions There is a shortage of fresh water resources available for humans, although the amount of water on the surface of the earth is large. In recent years, with the rapid development of the global economy and population, water security has been seriously challenged. However, the problems of water environment security, water resources security, water ecological security and water disaster avoidance cannot be effectively solved by relying only on the surface water system. Moreover, the traditional ground reservoir cannot fully ensure regional water security due to geographical limitations, the over-exploitation of groundwater and so on. In order to realize the efficient utilization of water resources and to solve the problem of water security, the idea of improving the water security and early warning system by using deep earth space and establishing an integrated system of underground reservoirs and surface water is put forward. In this paper, the underground reservoir is preliminarily defined, and three scientific theoretical problems are summarized: the basic theory of the underground reservoir, the design theory of the underground reservoir, and the theory of integration between the underground reservoir and the surface water system in terms of water quantity and quality. In addition, several key technologies are proposed, including distributed high-precision rainfall-runoff-water environment simulation, site selection and design of the underground reservoir, early warning and protection of the water environment in the underground reservoir, and the joint operation of the underground reservoir and the surface water system. In summary, this paper provides solutions for water security problems, and it is expected to effectively alleviate the uneven spatial and temporal distribution of water resources and to promote the development and utilization of deep earth space.
3,115
2020-03-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Current Compensation in Grid-Connected VSCs using Advanced Fuzzy Logic-based Fluffy-Built SVPWM Switching A main focus in microgrids is the power quality issue. The renewable sources used fluctuate, and this fluctuation has to be suppressed by designing a control variable that nullifies the circulating current caused by voltage fluctuations and deviations. The switching losses across power electronic switches, harmonics, and circulating current are the issues discussed in this article. The proposed intelligent controller is an interface between a voltage-sourced converter and a utility grid that affords default switching patterns with less switching loss, less current harmonic content, and overcurrent protection, and is capable of handling the nonlinearities and uncertainties in the grid system. The interfaced controller needs to be synchronized to the utility grid so that the grid network can be fine-tuned to inject/absorb complex reactive energy to/from the utility grid and thus maintain the variable power factor at unity, which, in turn, improves the system's overall efficiency for all connected nonlinear loads. The intelligent controller for stabilizing a smart grid is developed by implementing a fuzzy-built advanced control configuration to achieve a faster dynamic response and a more suitable direct current link performance. The innovation in this study is the design of a fuzzy-based space vector pulse width modulation (SVPWM) controller that exploits hysteresis current control and current compensation in a grid-connected voltage source converter. Using the proposed scheme, a current compensation strategy is combined with an advanced modulation controller to utilize the DC link voltage of a voltage source converter. To demonstrate the effectiveness of the proposed control scheme, offline digital time-domain simulations were carried out in MATLAB/Simulink, and the simulated results were verified using an experimental setup to prove the effectiveness, authenticity, and accuracy of the proposed method. Introduction In the present scenario, due to the depletion of conventional sources, the use of renewable energy resources plays a vital role in distributed generation. Vector-based modulation and regulation techniques use adaptive and nonlinear management strategies [23]. VSCs offer bidirectional power flow, constant DC voltage, and the elimination of harmonics. VSCs also have complementary passive filters for a variety of power quality issues [24,25]. The concept of vector management uses fuzzy logic management for current and DC voltage control in VSCs [26]. Z-Source Voltage Source Converter The Z-source VSC control technique used in a µgrid is the most flexible and authenticated control technique and is more sensitive than previously reported control techniques, with proper AC voltage regulation. The Z-source over a VSC utilizes an impedance network between the converter main circuit and the power source. A two-port network structure with a split inductor needs more capacitors over the X-shape network. As shown in Figure 1, this is an extended impedance source (Z-source) coupled to a converter circuit.
Figure 1. Schematic diagram of a voltage source converter (VSC). The Z-source impedance network connected to a VSC can provide a source of electric power to supply the grid in the case of a reactive load at the grid side, or can absorb the excess complex reactive power from the AC grid towards the Z-source to maintain the VPF at unity. The DC bus voltages are associated with those of an energy source. The three-phase AC voltage is provided as an RMS value as stated in Equation (1). The voltages $V_A$, $V_B$, and $V_C$ are the RMS values of the phase voltages and are represented as $V_{R(\mathrm{RMS})}$, $V_{Y(\mathrm{RMS})}$, and $V_{B(\mathrm{RMS})}$, respectively:

$$V_A = V_{R(\mathrm{RMS})}, \quad V_B = V_{Y(\mathrm{RMS})}, \quad V_C = V_{B(\mathrm{RMS})} \tag{1}$$

The direction of the current flow in the VSC is represented as per Equation (2). If $V_{VSC} < V_{ac}$, the entire network structure works with respect to the leading current (the connected nonlinear load contains a capacitive source of power). The VSC absorbs the excess amount of complex reactive power at the load, and the direction of the current flow is from the load to the source. If the network structure is working under a lagging load, $V_{VSC} > V_{ac}$, some of the power is supplied from the VSC to the grid so as to maintain the VPF at unity. In this case, the VSC functions as a capacitor and supplies capacitive reactive power to the AC utility grid, as shown in Figure 2.
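The voltage comparison above amounts to a simple per-cycle decision rule. The sketch below restates it in code; the function name and the idea of a supervisory mode check are illustrative assumptions, not part of the original controller.

```python
def vsc_reactive_mode(v_vsc_rms: float, v_ac_rms: float) -> str:
    """Decide the reactive-power exchange mode of the VSC from RMS voltage magnitudes.

    v_vsc_rms: RMS magnitude of the converter-side fundamental voltage.
    v_ac_rms:  RMS magnitude of the grid (AC) voltage at the coupling point.
    """
    if v_vsc_rms < v_ac_rms:
        # Leading (capacitive) load on the grid: the VSC absorbs the excess
        # reactive power, so current flows from the load towards the source.
        return "absorb reactive power (inductive behaviour)"
    elif v_vsc_rms > v_ac_rms:
        # Lagging (inductive) load: the VSC behaves as a capacitor and
        # injects reactive power to keep the power factor at unity.
        return "supply reactive power (capacitive behaviour)"
    return "no net reactive exchange"

print(vsc_reactive_mode(0.98, 1.00))  # -> absorb reactive power (inductive behaviour)
```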
The Z-source VSC shown in Figure 2 contains an LC input and output filter with common ground. The VSC, when switched under various pulses, encounters switching ripples, and these ripples in the input current are reduced with the use of the LC filter constructed within the Z-source. A reduced snubber value is added across each switching device of the VSC to limit voltage overshoot. The snubber used across the switches provides commutation paths during dead times. An LC output filter in the Z-source is required to reduce the large harmonic component that occurs at the output load. The designed Z-source is smaller in size when compared with other Z-source topologies. Space Vector Representation The space vector representation of the three-phase voltages $V_a(t)$, $V_b(t)$, and $V_c(t)$, spatially displaced by about 120°, is given by Equation (3):

$$\vec{V}(t) = \frac{2}{3}\left[V_a(t) + a\,V_b(t) + a^2\,V_c(t)\right] \tag{3}$$

where

$$a = e^{j2\pi/3} = \cos\frac{2\pi}{3} + j\sin\frac{2\pi}{3} \tag{4}$$

$$a^2 = e^{j4\pi/3} = \cos\frac{4\pi}{3} + j\sin\frac{4\pi}{3} \tag{5}$$

$$V_a(t) = V_m\cos(\omega t), \quad V_b(t) = V_m\cos\!\left(\omega t - \frac{2\pi}{3}\right), \quad V_c(t) = V_m\cos\!\left(\omega t + \frac{2\pi}{3}\right) \tag{6}$$

Substituting Equation (6) into Equation (3), we define the orthogonal system voltages as given in Equations (7) and (8). The orthogonal system voltages $V_\alpha$ and $V_\beta$ are calculated from the three-phase system voltages by using Clarke's/Park's transformation:

$$V_\alpha = \frac{2}{3}\left(V_a - \frac{1}{2}V_b - \frac{1}{2}V_c\right) \tag{7}$$

$$V_\beta = \frac{1}{\sqrt{3}}\left(V_b - V_c\right) \tag{8}$$

The angle between $V_\alpha$ and $V_\beta$ is represented by $\theta$ and is calculated as shown in Equation (9):

$$\theta = \tan^{-1}\!\left(\frac{V_\beta}{V_\alpha}\right) \tag{9}$$

The above-mentioned relations provide us with the idea of transforming stationary 3Ø voltages into 2Ø voltages, determined along the orthogonal plane in the stationary reference frame (STRF) (αβ coordinates), by using Clarke's transformation, as shown in Figure 3. $V_A$, $V_B$, $V_C$ are the 3Ø voltages with respect to the stationary reference frame, which are mapped onto the 2Ø orthogonal αβ coordinates; the three sinusoidal voltages are considered to be a single vector. Figure 3. Determination of space vector pulse width modulation (SVPWM) sectors.
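A minimal numerical sketch of Equations (7)-(9) and the sector determination of Figure 3 is given below; the 60°-sector convention and the sample voltages are illustrative assumptions.

```python
import math

def clarke(va: float, vb: float, vc: float):
    """Power-variant Clarke transform, Equations (7) and (8)."""
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (vb - vc) / math.sqrt(3.0)
    return v_alpha, v_beta

def sector(v_alpha: float, v_beta: float) -> int:
    """SVPWM sector (1..6), each spanning 60 degrees of the vector angle theta."""
    theta = math.atan2(v_beta, v_alpha)  # Equation (9), extended to all quadrants
    if theta < 0.0:
        theta += 2.0 * math.pi
    return int(theta // (math.pi / 3.0)) + 1

# Balanced three-phase sample at omega*t = 50 degrees (amplitude 1 p.u.)
wt = math.radians(50.0)
va = math.cos(wt)
vb = math.cos(wt - 2.0 * math.pi / 3.0)
vc = math.cos(wt + 2.0 * math.pi / 3.0)
v_alpha, v_beta = clarke(va, vb, vc)
print(sector(v_alpha, v_beta))  # -> 1 (theta = 50 degrees lies in sector 1)
```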
Each stage of the voltage vector needs a phase shift of about 120° to be distinct from the other components, as given in Equations (10) and (11):

$$\mathbf{V}_{abc} = \begin{bmatrix} V_m\cos(\omega t) \\ V_m\cos\!\left(\omega t - \frac{2\pi}{3}\right) \\ V_m\cos\!\left(\omega t + \frac{2\pi}{3}\right) \end{bmatrix} \tag{10}$$

$$\mathbf{V}_{\alpha\beta 0} = S\,\mathbf{V}_{abc} \tag{11}$$

where $S$ is a transformation matrix, which is defined in Equation (12):

$$S = \frac{2}{3}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \tag{12}$$

We substitute Equation (12) and Equation (10) into Equation (11) to obtain the orthogonal voltages in a three-axis system (the α component, the β component, and the no-load component) as given in Equation (13):

$$\begin{bmatrix} V_\alpha \\ V_\beta \\ V_0 \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix}\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix} \tag{13}$$

This 3Ø VSC needs eight switching states, which are produced by eight space vectors using the SVPWM vector $V_k$ as given by Equation (14). This equation produces six active states with a non-zero voltage vector; the two non-active states produce a zero-voltage vector.

$$\vec{V}_k = \begin{cases} \dfrac{2}{3} V_{dc}\, e^{j(k-1)\pi/3}, & k = 1, \dots, 6 \\[4pt] 0, & k = 0, 7 \end{cases} \tag{14}$$

The Calculation of the Duty Cycle in SVPWM To determine the on and off moments of the switches, $T_{on}$ and $T_{off}$, the sampling period $T_s$ has to be determined. The signal $V_{ref}$ needs to be reproduced throughout the period $T_s$, and, thus, the adjacent active voltage vectors $\vec{V}_1$ and $\vec{V}_2$ need to be turned on during $T_1$ and $T_2$, respectively. It proceeds similarly for each vector that is accessible in the state space. The voltage vectors are given by Equation (15):

$$\vec{V}_1 = \frac{2}{3}V_{dc}, \qquad \vec{V}_2 = \frac{2}{3}V_{dc}\,e^{j\pi/3}, \qquad \vec{V}_{ref} = \left|\vec{V}_{ref}\right| e^{j\theta} \tag{15}$$

Substituting Equation (15) into the volt-second balance of Equation (16), we have

$$\vec{V}_{ref}\,T_s = \vec{V}_1 T_1 + \vec{V}_2 T_2 \tag{16}$$

$$\left|\vec{V}_{ref}\right| T_s \left(\cos\theta + j\sin\theta\right) = \frac{2}{3}V_{dc}\,T_1 + \frac{2}{3}V_{dc}\left(\cos\frac{\pi}{3} + j\sin\frac{\pi}{3}\right) T_2 \tag{17}$$

Rearranging Equation (17), we have

$$T_1 = T_s\, m\, \frac{\sin\!\left(\frac{\pi}{3} - \theta\right)}{\sin\frac{\pi}{3}} \tag{18}$$

$$T_2 = T_s\, m\, \frac{\sin\theta}{\sin\frac{\pi}{3}} \tag{19}$$

where $m = \left|\vec{V}_{ref}\right| \big/ \left(\tfrac{2}{3}V_{dc}\right)$ is the modulation index. Consequently,

$$T_0 = T_s - T_1 - T_2 \tag{20}$$
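The dwell-time relations above can be evaluated directly. The sketch below computes them within one sector, assuming the standard volt-second balance as reconstructed in Equations (16)-(20); the sample values are illustrative.

```python
import math

def svpwm_dwell_times(v_ref: float, v_dc: float, theta: float, t_s: float):
    """Dwell times T1, T2, T0 for a reference vector inside one 60-degree sector.

    v_ref: magnitude of the reference voltage vector
    v_dc:  DC link voltage (active vectors have magnitude 2/3 * v_dc)
    theta: angle of the reference vector measured from the sector's first
           active vector, 0 <= theta < pi/3
    t_s:   sampling (switching) period
    """
    m = v_ref / ((2.0 / 3.0) * v_dc)          # modulation index
    k = t_s * m / math.sin(math.pi / 3.0)
    t1 = k * math.sin(math.pi / 3.0 - theta)  # time on vector V1, Equation (18)
    t2 = k * math.sin(theta)                  # time on vector V2, Equation (19)
    t0 = t_s - t1 - t2                        # remaining time on zero vectors
    return t1, t2, t0

# Example: 400 V DC link, 230 V reference, 30 degrees into the sector, 100 us period
t1, t2, t0 = svpwm_dwell_times(230.0, 400.0, math.radians(30.0), 100e-6)
print(f"T1 = {t1*1e6:.1f} us, T2 = {t2*1e6:.1f} us, T0 = {t0*1e6:.1f} us")
```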
SVPWM Fuzzy Controller The nonlinear SVPWM controller alone does not provide a suitable current control scheme in a VSC-based microgrid. Hence, in the proposed scheme, a fuzzy-logic-based controller is presented to realize the control objectives so that SVPWM provides complete control over the converter. The fuzzy-built SVPWM controller is also adaptable, which gives it desirable performance for VSCs in microgrids with nonlinear loads. A block diagram of a Z-source VSC with a fuzzy-logic-based SVPWM controller is shown in Figure 4. This controller provides an acceptable decoupled current slip and compensates for it to realize programmed advanced exchange control over the converter's output with an appropriate run-through delay. The VSC's output is connected to the utility grid, with the three-phase grid voltages given by Equation (21):

$$v_a = V_m\cos(\omega t), \quad v_b = V_m\cos\!\left(\omega t - \frac{2\pi}{3}\right), \quad v_c = V_m\cos\!\left(\omega t + \frac{2\pi}{3}\right) \tag{21}$$

The grid voltage is represented in the synchronous reference frame (SYRF) (dq coordinates) by using Park's transformation, as given in Equation (22):

$$\begin{bmatrix} V_d \\ V_q \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} V_\alpha \\ V_\beta \end{bmatrix} \tag{22}$$

Fuzzy Controller for Current Error Compensation On account of the drawbacks of the conventional SVPWM controller, the SVPWM controller with fuzzy logic is implemented in a grid-tied VSC. As illustrated in Figures 5 and 6, the fuzzy controller includes two inputs, namely the current and the current error, and one output. The fuzzy current compensation controller (FC3) examines the input current and the current error and transforms the calculated dynamic error into legitimate values in the range of −1 to +1 by using the fuzzy rules stated in Table 1 for one cycle of operation. Table 1 lists the framed fuzzy rules by which the current compensation output and the switching sequence are obtained. The control decision topology of the proposed FLC-based scheme is illustrated in Figure 7. Table 1. Framed fuzzy rules. S.No 1: If (Current is PM) and (Current_Error is PB) then (Switching_Seq_O/P is PB). The normalized current input and the current error are designated with a triangular membership function. Figure 7 illustrates the fuzzy control structure's output to compensate the current delivered to the grid. A fuzzy rule is specified for the current parameters connected to the grid based on a normalized current function. The membership function for the current compensation network is shown in Figure 8.
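The rule recovered in Table 1 can be evaluated with triangular membership functions as described above. The sketch below shows one Mamdani-style rule firing; the universe of discourse, the membership-function breakpoints, and the min operator for AND are illustrative assumptions, since the paper does not list them.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Normalized universes in [-1, 1]; PM and PB are two of the linguistic terms.
def mu_pm(x): return tri(x, 0.0, 0.5, 1.0)   # Positive Medium
def mu_pb(x): return tri(x, 0.5, 1.0, 1.5)   # Positive Big (saturates at +1)

def rule_pm_pb(current: float, current_error: float) -> float:
    """Table 1: IF Current is PM AND Current_Error is PB THEN output is PB.
    Returns the firing strength (min for AND), used to clip the PB output set."""
    return min(mu_pm(current), mu_pb(current_error))

print(rule_pm_pb(0.6, 0.9))  # firing strength of the single recovered rule
```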
Simulation Results The simulation test results are presented in this section to show the effectiveness of the proposed fuzzy-logic-based SVPWM controller. Offline digital time-domain simulations were carried out in MATLAB/SIMULINK, and experimental tests were conducted to verify the results of the simulations. Information in the three-phase stationary reference frame was transformed into dq coordinates by using Park's transformation. The reference voltage is shown in Figure 9. Measurements of the complex real and reactive power on the nonlinear load side were simulated and are shown in Figure 10. On the load side, the measured real power was 50 kW, and the measured reactive power was 25 W. On observing the real and reactive power of the grid system, it was clear that most of the power at the output was active power and that the observed voltage and current were in phase, as shown in Figure 11; hence, the power factor at the load side was almost unity. The implementation of SVPWM defined the voltage vector and the reference voltage vector as traveling between the axes. The sector determination of the vector voltages is shown in Figure 12; the sectors of the reference voltages follow the determined sectors. THD Analysis Figure 13 shows the THD results from the VSC using the SVPWM controller. The analysis was carried out using a fast Fourier transform (FFT) with the Power graphical user interface (GUI). The FFT analysis of the VSC using the SVPWM controller was performed for one cycle among 50 cycles at a converter voltage of 323.9 V. The analysis resulted in a THD factor of 0.82%. Figure 13. The THD of the converter using the SVPWM controller. Figure 14 shows the simulation test of the THD on the load side. The simulation test was carried out in the FFT window of the Power GUI for one cycle at a voltage of 168 V. The load yield is 168 V at the fundamental frequency with a THD of about 0.05%. On observing the results, it was clear that the proposed controller produced better efficiency, with a harmonic distortion of 0.82% in the converter section and 0.05% on the load side, when compared with the conventional SVPWM controller.
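The THD figures quoted above follow from an FFT of the simulated waveform, analogous to the Power GUI analysis. Below is a minimal sketch of that computation; the synthetic test waveform and sampling choices are illustrative assumptions, not the simulation data of this paper.

```python
import numpy as np

def thd_percent(signal: np.ndarray, fs: float, f1: float, n_harmonics: int = 40) -> float:
    """THD (%) = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude.

    signal: samples spanning an integer number of fundamental cycles
    fs:     sampling frequency in Hz
    f1:     fundamental frequency in Hz
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    bin_f1 = int(round(f1 * n / fs))             # FFT bin of the fundamental
    fund = spectrum[bin_f1]
    harmonics = [spectrum[k * bin_f1] for k in range(2, n_harmonics + 1)
                 if k * bin_f1 < len(spectrum)]
    return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / fund

# Synthetic 50 Hz wave with a small 5th-harmonic component (1% of fundamental)
fs, f1 = 10_000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)                  # 50 full cycles
v = 323.9 * np.sin(2 * np.pi * f1 * t) + 3.239 * np.sin(2 * np.pi * 5 * f1 * t)
print(f"THD = {thd_percent(v, fs, f1):.2f}%")    # ~1.00%
```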
Experimental Results The hardware configuration of the SVPWM-controlled solar inverter is shown in Figure 15. The solar panel was kept outside, and its rating is shown in Table 2. The generated inverter output was connected to a nonlinear inductive load with a rating of 3 kW. The VSI was made to operate as a STATCOM to compensate for the reactive power. A digital modulation technique was designed and used to switch the inversion system. The digital switching technique provides advantages as it works with '0' and '1': it provides higher precision, enhanced system performance, and better system stability and flexibility. The use of digital SVPWM allows the entire system to utilize 100% of the DC link bus voltage. The system was tested for THD, and the THD value was found to be less than 2.7% for a linear load and less than 3.15% for a nonlinear load. The response of the system under dynamic conditions was less than 2.5% at full load with a recovery time of <1.25% (<25 ms). The THD of the grid-connected inverter with the SPWM controller was 0.71%, whereas the THD of the grid-connected inverter with the SVPWM controller was 0.11%. Hence, the overall efficiency of the inverter was improved. The result of a comparative analysis of the grid-connected solar inverter using the SPWM controller and the grid-connected solar inverter using the SVPWM controller is shown in Table 3. The system's performance was tested with different load ranges, and its efficiency is tabulated in Table 4. The efficiency in the case of the SPWM controller was low, with a value of 75.91%, due to the lower utilization of the bus voltage and a more distorted output. The distorted output had an average THD value of 0.71%. The maximum efficiency was attained when using the SVPWM controller, as the bus voltage was completely utilized. With the SVPWM controller, the THD value was reduced to an average of 0.11%, and the system's efficiency was around 90.5% at maximum load. The solar output was connected to the inverter board, which was controlled by the fuzzy-logic-based SVPWM controller and configured with six IGBT switches (model: FGB20N60SFD). From the experimental circuit, it was observed that, at T_c = 25 °C and an inductive load of 250 W, the rise time was 15 ns and the loss across the switches was 0.17 mJ during off-time and 0.35 mJ during on-time. At a full load of 1.5 kW, it was observed that the temperature increase was 125 °C and the losses across the switches were 0.58 mJ during off-time and 0.20 mJ during on-time. The power switches were working with fewer than six active states. The voltage and current were more or less in phase, and the measured power factor was unity. The V_out of the inverter was found to have no distortion, and, hence, the system was able to attain a maximum efficiency of 90.5% at a load of 1.25 kW. As the amount of distortion was reduced, the system's performance and its efficiency improved.
Conclusions In the present study, a microgrid model with a fuzzy-logic-based SVPWM controller was designed to provide excellent output performance, optimized efficiency, and high reliability. The effect of the integration of the system was tested with an installed solar system with an induction motor as a load. The designed control structure's effectiveness was tested in MATLAB/SIMULINK in terms of circulating current, dynamic response, and THD. A Z-source converter with an input/output filter was presented to reduce the large harmonic component. The dynamic response of the system was good under transient conditions. The experimental results show that the Z-source converter produced no voltage spike across the switch. The utilization of the DC bus voltage was increased by 15%, and the harmonic distortion was decreased compared with the conventional PWM technique. By using SVPWM fuzzy switching, the peak switch current at the time of switching and the losses in the switches can be reduced. Therefore, there is less stress on the converter's controls, and the perceptible noise can also be minimized. An effectual means for power transformation between the source and the load in a wide range of electric power conversion applications was developed with the design of the Z-source VSC with a fuzzy-logic-based SVPWM controller. The system's THD was tested, and it was found that the THD value was less than 2.7% for a linear load and less than 3.15% for a nonlinear load. The response of the system under dynamic conditions was less than 2.5% at full load with a recovery time of <1.25% (<25 ms). The THD of the grid-connected inverter with the SPWM controller was 0.71%, whereas the THD of the grid-connected inverter with the SVPWM controller was 0.11%. Hence, the overall efficiency of the inverter was improved. The results of a comparison, in terms of THD level, between the SPWM and SVPWM controllers in islanding mode are presented in Table 3.
The designed fuzzy-logic-based SVPWM controller provides increased bus-voltage utilization, lower current harmonic content, reduced voltage spikes across the switches, reduced harmonic distortion, improved power quality, and stable system operation.
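For reference, a THD figure such as the 0.11% and 0.71% values quoted above is conventionally extracted from a sampled output waveform via its harmonic spectrum. The sketch below shows one common way to compute it; the waveform and harmonic amplitudes are synthetic and are not the measured inverter data.

import numpy as np

def thd(samples, fs, f0, n_harm=40):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental,
    read off an FFT; assumes an integer number of cycles in the record."""
    spec = np.abs(np.fft.rfft(samples)) / len(samples)
    df = fs / len(samples)
    amp = lambda f: spec[int(round(f / df))]
    harmonics = [amp(k * f0) for k in range(2, n_harm + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / amp(f0)

fs, f0 = 100_000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)                  # 10 fundamental cycles
v = np.sin(2 * np.pi * f0 * t)                 # fundamental component
v += 0.001 * np.sin(2 * np.pi * 3 * f0 * t)    # synthetic 3rd harmonic
v += 0.0005 * np.sin(2 * np.pi * 5 * f0 * t)   # synthetic 5th harmonic
print(f"THD = {100 * thd(v, fs, f0):.2f}%")    # ~0.11% for these amplitudes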
"Nano-garden cultivation" for electrocatalysis: controlled synthesis of Nature-inspired hierarchical nanostructures†

Xiaoyu Yan, a Yang Zhao, a Jasper Biemolt, b Kai Zhao, a Petrus C. M. Laan, b Xiaojuan Cao a and Ning Yan * ab

Three-dimensional intricate nanostructures hold great promise for real-life applications. Many of these hierarchical structures resemble shapes from Nature, demonstrating much improved physico-chemical properties. Yet, their rational design and controlled synthesis remain challenging. By simply manipulating (electro)chemical gradients using a combined hydrothermal and electrodeposition strategy, we herein show the controlled growth of Co(OH)2 nanostructures, mimicking the process of garden cultivation. The resulting "nano-garden" can selectively contain different patterns, all of which can be fully phosphidated into CoP without losing the structural integrity. Remarkably, these CoP nanostructures show distinct catalytic performance in the oxygen evolution and hydrogen evolution reactions. Under pH-universal conditions, the CoP "soil + flower-with-stem" structure shows a much more "effective" surface area for gas-evolving reactions with lower activation and concentration overpotentials. This provides superior bifunctional catalytic activity for both reactions, outperforming noble metal counterparts.

Nature holds solutions to diverse scientific problems and serves as a major source of inspiration for human beings. The billions of years of evolution have led to the formation of numerous geographical and biological structures with stunning complexity. By studying these natural patterns and mimicking their topographic structure in materials synthesis, researchers often find exciting solutions to optimize the physico-chemical properties of materials. In particular, with the bloom of nanotechnology, the rational synthesis of Nature-inspired hierarchical shapes at the nanometric scale has become of great importance, showing potential for real-life applications in catalysis, 1 electronics, 2 optics 3 and bio-medicine. 4 A common approach for preparing nanostructures is top-down lithography. 5 This subtractive method includes photo-, 6 electron-beam- 7 and nanoimprint-lithography. 8 It can indeed control the topology of the pattern accurately and is widely used in the nanofabrication of semiconductors. [9][10][11][12][13][14][15] Nonetheless, this approach is rather time-consuming and cost-ineffective, hampering its wide application in different fields. 15,16 Therefore, bottom-up synthesis is increasingly explored, which promises to address the drawbacks of the top-down counterpart. This additive strategy is generally based on the self-assembly of molecules and/or nanoscale building blocks. 17,18 Interesting nanostructures resembling natural shapes can be obtained.
For instance, using solvothermal or hydrothermal strategies, flower-, 19-23 leaf-, 24,25 coral- [26][27][28][29][30] and sea-urchin-like nanostructures have been reported recently, 31,32 demonstrating much enhanced performance in different applications. However, the biomimetic appearance of these patterns during the synthesis is, in many cases, not based on predictive mechanisms. 33 Besides, costly additives and surfactants are often used for geometry control, and large-scale material preparation is thus challenging. [34][35][36] Electrodeposition is a simple alternative approach to nanofabrication, which can be dated back to 1987 in the pioneering work of Martin et al. 37 Yet, a template is often required for the synthesis, and plating three-dimensional intricate structures at the nanometric scale remains problematic. Though the co-precipitation approach has recently allowed precise control of the formation of complex micrometer-scale shapes with amazing beauty, downsizing the shape to the nanometer scale is as yet impossible. 18,36,38,39 In this work, by coupling the hydrothermal and electrodeposition synthesis approaches under different reaction conditions, we managed to design and prepare a number of Co(OH)2-based nano-architectures that resemble various items in a garden ("soil", "flake", "sprout", "grass", "flower" and "leaf"). Such patterns can be fully phosphidated, forming CoP structures. The suitable combination of the nanometric items on the surface of carbon cloth provides superior bifunctional catalytic activity for overall water splitting, outperforming the state-of-the-art catalysts at high current densities.

a School of Physics and Technology, Wuhan University, Wuhan, China. E-mail: n.yan@uva.nl

Fig. 1a is the schematic cartoon of the step-by-step "nano-garden cultivation" on the carbon cloth (CC). The distinct topologies were sustained after phosphidation, as shown in the corresponding scanning electron microscopy (SEM) images. The gardening started with "earthing" by encapsulating the fibers of CC with a dense layer of Co(OH)2 via a modified hydrothermal approach. The detailed synthesis can be found in the ESI. † On this "soil" layer with a thickness of ∼100 nm (cf. the SEM image of the pristine fiber in Fig. S1 †), many "sprouts" appeared after increasing the temperature of the hydrothermal synthesis. They were strongly "rooted" in the soil, which is beneficial for robust catalysis in gas-evolving reactions without suffering break-up or detachment (vide infra). The continuous growth finally led to the formation of "grass", covering the entire soil land (see Fig. 1d and S2 †). These grasses have a high aspect ratio, with an average length of 1.5 μm and a thickness of 100 nm. By increasing the Co2+ concentration from 0.13 M to 0.3 M, they can progressively evolve into "lithic flakes" and "rocks", as shown in Fig. S3. † The "blossom" of the "bush" was enabled by the electrodeposition of Co(OH)2 following reactions (1) and (2), in which nitrate reduction generates hydroxide at the electrode and Co2+ then precipitates as the hydroxide:

NO3− + H2O + 2e− → NO2− + 2OH−   (1)
Co2+ + 2OH− → Co(OH)2   (2)

The competing reactions are the further reduction of nitrate and, most notably, the reduction of water:

2H2O + 2e− → H2 + 2OH−   (4)

In 0.04 M Co2+ electrolyte, the electrodeposition dominantly proceeded from the tip of the grass stem, where the radius of curvature is smaller with a higher space-charge density. At the lower parts of the stem, electrodeposition was suppressed as the competitive water reduction reaction (reaction (4)) occurred. 41,42 The ramified growth of the "flower petals" was triggered by the electroconvection of the solution at the tip, resulting in the continuous splitting of the deposits into branches in all directions. 43
This growth mechanism is detailed in the ESI in Fig. S4. † 44,45 After 300 s of deposition, a sphere-like flower with a diameter of ca. 200 nm was formed on the top of the stem (Fig. 1e). The SEM image in Fig. S5 † at lower magnification implies that nearly all the stems had "bloomed". On the contrary, electrodeposition can also initiate from the bottom of the stem when a 0.07 M Co(NO3)2 electrolyte is employed. In this more concentrated solution, the competing water reduction reaction was largely suppressed while reaction (1) prevailed. The formation of Co(OH)2 can therefore start from the bottom of the stem. This finally caused the deposition of the "leaf" pattern with the same orientation, as shown in Fig. 1f. From the microscopic point of view, each leaf was in fact an interwoven dendritic structure of the deposits. The bottom side of the leaf spanned more than 0.5 μm and the inter-leaf distance was ca. 200 nm on average. A complete summary of all the nano-garden items obtained via controlled synthesis is shown in Fig. S6. †

X-ray diffraction (XRD) patterns in Fig. S7a † compare the as-prepared crystal structures of all the nanostructures before and after phosphidation (denoted as soil/CC, sprout/CC, flake/CC, grass/CC, leaf/CC and flower/CC). The as-prepared precursor shows the diffraction peaks expected for a carbonate-containing Co(OH)2 phase. 46 The incorporation of carbonate was due to the interaction with atmospheric CO2 before the XRD measurement. This is also evidenced by the Fourier-transform infrared (FTIR) spectra in Fig. S7b. † After phosphidation, all samples were fully converted to CoP. The patterns show four distinct peaks at 31.7°, 36.2°, 46.3° and 48.2°, which can be assigned to the (011), (111), (112) and (211) planes of the orthorhombic CoP phase (JCPDS no. 29-0497), respectively. [47][48][49][50] In particular, the corresponding SEM X-ray energy dispersive spectra (EDX) and the high-resolution transmission electron microscopy (HRTEM) micrograph in Fig. S8 and S9 † also verified the formation of CoP. They also indicated that the flower petals and stems both uniformly consisted of CoP. We also performed X-ray photoelectron spectroscopy (XPS), confirming the formation of CoP. The Co 2p XPS spectrum in Fig. S7c † shows both 2p 3/2 and 2p 1/2 peaks, in which the ones at 803.2 and 786.0 eV were the shake-up satellites. 27,51 The peak at 781.8 eV was assigned to CoP and other oxidized forms of Co, while the one at 778.1 eV was ascribed to residual metallic Co. [52][53][54] The reduced state of P in CoP was also seen in the P 2p spectrum at 129.0 eV in Fig. S7d. † 43,55,56

We then examined the catalytic activity of all the CoP nanostructures in the hydrogen evolution reaction (HER) under acidic conditions. A standard three-electrode configuration was applied with a stationary working electrode to better simulate industrially relevant conditions. Fig. 2a compares the polarization curves (LSV, linear sweep voltammetry) of CC, soil/CC, sprout/CC, flake/CC, grass/CC, leaf/CC and flower/CC in 0.5 M H2SO4 aqueous solution. CC showed little activity for HER, with an extremely high overpotential: the onset potential, defined as the potential at a faradaic current density of 1 mA cm⁻², reached −325 mV (vs. RHE here and hereafter unless otherwise specified). Interestingly, although the other samples shared an identical chemical composition, their catalytic activity varied significantly. The flower/CC had the highest onset potential, of −36 mV. The overpotentials at the benchmark current densities of 10, 20, 50 and 100 mA cm⁻² were 68, 85, 103 and 112 mV, respectively (see Fig. 2b).
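Benchmark overpotentials and Tafel slopes such as those quoted above are read off the polarization curve by standard post-processing. The sketch below illustrates the usual procedure on synthetic data built to mimic the flower/CC numbers (68 mV dec⁻¹ slope, 68 mV at 10 mA cm⁻²); it is an illustration of the method, not the authors' analysis script.

import numpy as np

def tafel_slope(eta_mV, j_mA_cm2, fit_range=(5.0, 50.0)):
    """Fit eta = a + b*log10(j) over a kinetically controlled current
    window; the returned b is the Tafel slope in mV per decade."""
    mask = (j_mA_cm2 >= fit_range[0]) & (j_mA_cm2 <= fit_range[1])
    b, a = np.polyfit(np.log10(j_mA_cm2[mask]), eta_mV[mask], 1)
    return b

def overpotential_at(eta_mV, j_mA_cm2, j_target):
    """Interpolate the overpotential at a benchmark current density."""
    return float(np.interp(j_target, j_mA_cm2, eta_mV))

j = np.logspace(0, 2, 200)           # 1 .. 100 mA cm^-2
eta = 68.0 * np.log10(j)             # idealized Tafel behaviour, no mass transport
print(tafel_slope(eta, j))           # -> 68 mV per decade
print(overpotential_at(eta, j, 10))  # -> 68 mV at 10 mA cm^-2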
These values are among the best of today's non-precious-metal HER catalysts in an acidic environment; a detailed comparison can be found in Table S1. † Fig. 2c shows the Tafel plots of all the catalysts. Remarkably, the Tafel slope of flower/CC was 68 mV dec⁻¹, which was also the lowest among all the nanostructures. In addition, to evaluate the difference in intrinsic catalytic properties, the turnover frequency (TOF) was also plotted versus the overpotential (see Fig. S10 †). The flower/CC catalysts exhibited significantly larger TOF values than those of leaf/CC in acidic environments, particularly at higher overpotentials where the mass-transport limitation appeared. Specifically, the TOF of flower/CC reached 1.3 s⁻¹ at an overpotential of 120 mV. This is in accordance with the electrochemical impedance spectra (EIS) in Fig. S11. † The Nyquist plot of the flower/CC exhibits a smaller charge-transfer resistance than that of the leaf/CC, suggesting faster kinetics at the interface. The better performance of flower/CC mainly pertained to its physical geometry. Apparently, soil/CC and sprout/CC had lower surface areas and smaller numbers of active sites, and unsurprisingly showed limited activity. This was proven in the electrochemically active surface area (EASA) plots shown in Fig. S12 †: a positive correlation between the HER performance and the EASA can be drawn. Although leaf/CC and flake/CC possessed larger numbers of active sites, as reflected by the EASA, the interlayer distance between two leaves or flakes was less than 200 nm while the height of the layer was >2 μm (cf. the SEM images). When hydrogen bubbles evolve at high current density, the reaction on the surface at the bottom of the densely packed structure might suffer from slow mass transport, 57 and the activity became increasingly lower than that of the flower/CC at higher current density. Conversely, the geometry of flower/CC was superior to the controls: the flowers on top of the stems, comprising many nanometric petals, were all well exposed to the bulk electrolyte, yet the loosely packed stems, with much less CoP deposited, can also effectively catalyze HER (see the cartoon in Fig. 2d). Thus, such a hierarchical structure provided complementary features in terms of abundant active sites and rapid mass transport. This set of experiments also implied that the physical geometry of nanomaterials can greatly affect the catalytic activity. Particularly for gas-evolving electrocatalytic reactions, densely packed nanowires or nano-arrays must be fine-tuned to minimize the overpotentials arising from the lack of effective active sites and the limitation of mass transport. 57 The stability test of flower/CC was carried out using both cyclic voltammetry (CV) and chronoamperometry. Fig. 2e shows the comparison of the LSV curves of the catalyst after 0, 1000 and 2000 CV cycles in the voltage window from +0.2 to −0.3 V (vs. RHE). Little degradation was observed in comparison with the initial performance (the morphologies of the spent catalysts are shown in Fig. S13 †); the overpotential showed a 2 mV increase at 100 mA cm⁻². In the chronoamperometric analysis, the voltage was set at −0.094 V, and the current density was maintained at ca. 31 mA cm⁻² for more than 30 h. This suggested that the nano-flowers rooted in the "soil" have excellent structural robustness, and that this "nano-garden cultivation" approach effectively increased the activity without any compromise of stability.
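The 30 h chronoamperometric run quoted above translates directly into an amount of hydrogen evolved per unit electrode area via Faraday's law. The sketch below performs this back-of-the-envelope conversion; the 24.5 L mol⁻¹ molar gas volume near room temperature is an assumption of the sketch, not a value from the paper.

F = 96485.0      # C/mol, Faraday constant
V_MOLAR = 24.5   # L/mol, molar gas volume near 25 C (assumed)

def h2_yield_per_cm2(j_mA_cm2, hours):
    """Faraday's law for HER (2 electrons per H2 molecule): returns the
    hydrogen volume in liters evolved per cm^2 of electrode."""
    charge = (j_mA_cm2 / 1000.0) * hours * 3600.0   # C per cm^2
    moles_h2 = charge / (2 * F)
    return moles_h2 * V_MOLAR

# ~31 mA cm^-2 sustained for 30 h, as in the chronoamperometric test:
print(f"{h2_yield_per_cm2(31.0, 30.0):.2f} L of H2 per cm^2")  # ~0.43 L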
The structural advantage of flower/CC was also proven in the HER performed under both neutral and alkaline conditions. Fig. 3 summarizes the electrocatalytic performance of both flower/CC and leaf/CC. In a 1.0 M phosphate buffer solution (PBS), flower/CC remained the best catalyst: the onset potential was −36 mV, whereas the overpotentials at 10 and 50 mA cm⁻² were 72 and 138 mV, respectively. In the stability test, performed via the analogous approach used in acidic media, flower/CC also exhibited excellent robustness. Interestingly, it seemed that the catalyst was "activated" during the CV cycles or the chronoamperometric test, showing an activity jump. This phenomenon might be pertinent to the formation of low-valence Co complexes, such as Co(OH)x, on the surface of CoP under the cathodic potentials. In addition, the progressive incorporation of phosphonic acid pendant groups onto the surface increased the proton-accepting capability. Benefiting from these proven synergistic effects, the HER activity was enhanced after the longevity test. 58,59 In 1.0 M KOH, flower/CC CoP still exhibited an ultra-low overpotential of 55 mV at the benchmark 10 mA cm⁻², and the Tafel slope was 56 mV dec⁻¹. Likewise, the overpotential difference relative to leaf/CC became increasingly large as the current density rose, indicating that concentration polarization was also suppressed in flower/CC compared with leaf/CC. In the longevity test, only trivial degradation (∼6%) was recorded during the chronoamperometric test biased at −0.082 V. We also compared the HER performance of flower/CC with the state-of-the-art non-noble-metal catalysts in the literature (see Tables S2 and S3 †); the activity was indeed among the best. This further supports the aforementioned structural advantages of flower/CC at high reaction rates. Therefore, we concluded that the CoP with a flower-with-stem structure rooted in a soil layer enables superior HER activity under pH-universal conditions. Apart from HER, the oxygen evolution reaction (OER) is another typical and important gas-evolving reaction for energy storage and conversion. 60 We then evaluated the electrocatalytic performance of these CoP nanostructures in 1.0 M KOH electrolyte. Fig. 4a compares the LSV curves of flower/CC and leaf/CC with those of the state-of-the-art IrO2/CC OER catalysts. The onset potential of all the examined catalysts was essentially identical. However, at high current densities, both flower/CC and leaf/CC showed substantially improved activity. In particular, the overpotential of IrO2/CC at 50 mA cm⁻² was 417 mV, while that of flower/CC was only 324 mV. The outstanding OER performance of flower/CC was also supported by the low Tafel slope of 73 mV dec⁻¹. Both CV and chronoamperometric studies confirmed the stability of this material. A detailed comparison of the studied catalysts with those reported in the literature is summarized in Table S4. † Finally, we carried out overall water splitting reactions using a symmetric two-electrode setup, with identical anode and cathode catalysts to investigate their bifunctionality. Fig. 4e compares the LSV curves of the flower/CC and leaf/CC cells. At the benchmark 10 mA cm⁻², the overall overpotential was 370 mV for flower/CC, which showed the better performance. This potential was 49 mV higher than the sum of the two overpotentials obtained in the three-electrode setup (55 mV for HER and 276 mV for OER).
We postulated that this difference might originate from the additional polarization caused by mass transport. In the stability test shown in Fig. 4f, we biased the cell at 1.78 V vs. the open circuit potential (OCP) for 30 h, and the current density was stabilized at ∼29 mA cm⁻² with little degradation. The redox stability was studied by cycling the cells from 1.0 to 1.6 V vs. OCP; no loss was observed after 2000 cycles (see the inset of Fig. 4e). In conclusion, we developed a simple yet rational approach for accurately controlling the morphology of CoP catalysts at the nanoscale via the combined hydrothermal and electrodeposition approach. It enabled us to synthesize a number of Nature-inspired hierarchical nanostructures with distinct catalytic activity in water splitting reactions. Under pH-universal conditions, the "soil + flower-with-stem" structure is superior, showing a much more "effective" surface area for gas-evolving reactions with lower activation and concentration overpotentials. This work provides a simple approach for controllable synthesis, reveals the importance of fine-tuning the architecture of electrocatalysts, and might open bona fide opportunities for the catalysis and materials science community in the context of sustainable energy research.

Conflicts of interest

There are no conflicts to declare.
Dark Photon Studies at BABAR

An overview of the most recent results by BABAR on the search for the dark photon and dark sector gauge bosons is presented, with larger emphasis on dark photon decays into invisible channels and searches for a muonic dark force.

The cross section for its production could be as large as some tenths of a fb, and the sensitivity to such a signal would be proportional to the integrated luminosity (but inversely proportional to the available center-of-mass energy). The ways in which such particles could be observed depend on their mass and coupling, also relative to other DM states belonging to the same Dark Sector. In general, dark photons can always appear whenever SM photons are produced, with an additional ε²_Y suppression factor. As the lightest particle of the Dark Sector, the dark photon is expected to decay to di-lepton or light di-hadron pairs, and it could be observed as a narrow resonance (Γ ∼ m ε²_Y) in the invariant mass spectrum of the pair. If, on the other hand, some lighter and sufficiently stable DM states (χ) exist, one would expect the dark photon to decay into invisible channels. These decays would be dominant, as they are not ε_Y suppressed. Were the χ states unstable, one would expect more complicated signatures in which many leptons could appear, as a consequence of off-shell patterns also including Higgs and heavy lepton and quark loops.

Search for Dark Photon visible decays at BABAR

BABAR [4] was one of the first experiments where searches for the dark photon were performed. In its 10-year life (with the last data-taking in 2008), BABAR collected more than 500 fb⁻¹ of e⁺e⁻ annihilation events, with beams delivered by the PEP-II machine at SLAC at several energies corresponding to the Υ(2, 3, 4S) bottomonium excitations. Even though its integrated luminosity is going to be overtaken soon by the upcoming Belle-II experiment, the Υ(2S) sample collected by BABAR will remain the richest ever for many years to come. The first results on dark photon searches by BABAR were published in a well-known paper in 2014 [5], presenting the study of dark photon production in the e⁺e⁻ → γA′ radiative return reaction and its subsequent decay into an e⁺e⁻ or μ⁺μ⁻ pair. This analysis went along in parallel with searches for new (beyond-SM) pseudoscalar particles, in particular belonging to the light Higgs sector, produced by e⁺e⁻ → hA′ Higgs-strahlung followed by light Higgs decays into a dark photon pair, h → A′A′ → (ℓ⁺ℓ⁻)(ℓ′⁺ℓ′⁻) [6], as well as dark gauge bosons produced in dark photon decays (e⁺e⁻ → γA′, A′ → W′W′′ → (ℓ⁺ℓ⁻)(ℓ′⁺ℓ′⁻)) [7]. No observation was reported for dark photons nor for any other new "exotic" objects; however, BABAR could provide a wide exclusion area in the (m_A′, ε_Y) Dark Sector parameter plane, shown in Fig. 1 along with some other regions excluded by other experiments. The largest local significance for a possible dark photon signal was found to be 3.4σ at 7.02 GeV for decays in the e⁺e⁻ channel, and 2.9σ at 6.09 GeV for the μ⁺μ⁻ one, neither being enough to claim its presence. Together with some more recent results by NA48/2 [8], the BABAR measurement almost excludes the parameter region which would favor a vector Dark Sector as an explanation for the observed (g − 2)_μ discrepancy.
Search for Dark Photon invisible decays at BABAR

The signature for such events, in which the dark photon decays into a pair of light DM χ particles which go undetected, is just a single photon, produced in the e⁺e⁻ → γA′ reaction [9]. This involves critical experimental issues: an efficient trigger on single photons and a large hermeticity of the detector are required, to avoid acceptance holes as much as possible, in order to limit the contribution of overwhelming background sources with photons that can be lost or convert in the detector materials (mainly 2γ and radiative Bhabha events). As far as the trigger is concerned, BABAR could collect about 53 fb⁻¹ of single-photon events in the last part of its running period (mostly at the Υ(2S) and Υ(3S) energies), in which a first-level hardware trigger was applied, requiring at least one cluster in the electromagnetic calorimeter associated to a photon with energy larger than 800 MeV. At the third level, two software trigger lines were set up which distributed the total sample into two subsets according to the center-of-mass photon energy: a high-energy sample (E*_γ > 2 GeV), corresponding to a low-mass recoiling A′ (−4 < m²_A′ < 36 GeV²), and a low-energy one (E*_γ > 1 GeV), corresponding to a higher A′ mass region, 24 < m²_A′ < 69 GeV². In both cases, events with tracks coming from the interaction vertex were not accepted. The two samples needed separate treatment due to the different nature of the background in the two energy regions, and the search for the dark photon was performed independently in each of them. Some common criteria were however applied to ensure the detection of a good-quality photon well contained in the detector acceptance, without extra charged tracks in the event and with very limited electromagnetic noise in the calorimeter. The missing 4-momentum vector, which corresponds to the undetected A′, was also constrained by tight fiducial cuts in order to prevent leakage effects and conversions, for instance on the electromagnetic calorimeter crystal edges. The external BABAR muon detector information was also exploited as a veto for events in which conversions could have occurred. Machine learning techniques (Boosted Decision Trees, BDTs) were applied to effectively separate signal from background events. The search for dark photons proceeds through a scan of the missing mass (m_X) plot in sliding windows, with a width a multiple (of the order of 20×) of the photon resolution: in each mass slice, the spectrum is fitted by a Crystal Ball function to reproduce the possible signal, plus a background function whose shape changes according to the selected data set. In the low-mass sample, the main source of background is represented by 2γ events in which one of the photons is lost, and its contribution peaks at m_X = 0; in addition, a smooth contribution develops at higher masses, produced by radiative Bhabha events. In the higher-mass selection, on the other hand, the peaking contribution is missing: only a tail of the 2γ reaction contributes, while the largest part of the background comes from radiative Bhabha events in which the e⁺e⁻ pair is lost due to the detector acceptance.

Figure 1. Compilation of existing dark photon constraints in the parameter space ε_Y vs dark photon mass, for decays into light lepton pairs.
Suitable fits were performed simultaneously on samples taken at different center-of-mass energies, with proper selections on the BDT discriminant controlling the amount of background in each mass slice, for the low- and high-mass samples; fits on control samples populated mainly by background events allowed the shape of the background contribution to be fixed and used with constant parameters in addition to the possible resonant signal. The local significance of the signal was estimated through the likelihood ratio

S = sign(N_sig) √(2 log(L/L₀)),

in which N_sig corresponds to the integral of the Crystal Ball function used to reproduce the signal contribution, after background subtraction, and L and L₀ are the likelihoods of the fits with and without the inclusion of the resonant signal, respectively. The global maximum significance, extended over all the analysed samples, did not exceed 2.6σ (local maximum at 6.21 GeV), which is not enough to claim any new structure. However, the upper limit on the production cross section at 90% C.L. could be evaluated, and from this the exclusion regions in the dark photon parameter space could be extracted. The areas excluded by BABAR are shown in Fig. 2: the full band of parameters explaining the (g − 2)_μ anomaly as a consequence of the existence of the dark photon is ruled out, down to dark photon masses of the order of the MeV.

Figure 2. Regions of the A′ parameter space (ε_Y vs m_A′) excluded by the BABAR analysis [9] of invisible decays, superimposed on previous constraints and on the band preferred at 2σ for the explanation of the (g − 2)_μ anomaly.

Search for Muonic Dark forces

An alternative search for Dark Sector vector mediators which could be the carriers of a new symmetry favoring the interaction between heavy leptons (an L_μ − L_τ gauge interaction) was performed by BABAR looking at multi-muonic final states [10]. These could be reached through the reactions e⁺e⁻ → μ⁺μ⁻Z′, Z′ → μ⁺μ⁻, in which the Z′ vector gauge boson is produced via radiation from muons (or taus) and subsequently decays to a μ⁺μ⁻ pair. Such a model [11] could explain several observed anomalies: beyond the already mentioned (g − 2)_μ one, the relative abundance of SM neutrinos compared to sterile ones, and the proton radius discrepancy puzzle. No observation of the Z′ gauge boson has been made so far by experiments studying the inelastic interaction of neutrinos with nuclei. Given the relatively small mass expected for the Z′ [12], e⁺e⁻ annihilation could again be a suitable environment in which to look for its signature: BABAR performed its search for the first time. The sample studied by BABAR included the full statistics, collected at all energies of the bottomonium peaks as well as off-peak (514 fb⁻¹): events with four reconstructed muons carrying the full available energy were selected. The extra energy per event, carried by additional neutral particles and measured by the electromagnetic calorimeter, was required to be smaller than 200 MeV; muons were identified in pairs of the same charge. Events coming from the decay of Υ(3,2S) to Υ(1S)π⁺π⁻ with the Υ(1S) subsequently decaying into μ⁺μ⁻, with π/μ misidentification, were removed from the analysed sample. The invariant mass of the four-muon system was studied in a 500 MeV wide window around the nominal center-of-mass energy: the main contribution to this region is given by 4μ QED background events.
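The likelihood-ratio significance defined above can be illustrated with a toy counting model. The sketch below replaces the Crystal Ball plus background fit with a fixed Gaussian signal template scaled over a flat background in a single mass window; all counts are invented, so this only demonstrates the S = sign(N_sig)√(2 log(L/L₀)) construction, not the BABAR analysis itself.

import numpy as np

def poisson_loglike(n_obs, mu):
    """Poisson log-likelihood with the constant n! terms dropped."""
    return float(np.sum(n_obs * np.log(mu) - mu))

def local_significance(n_obs, bkg, sig_shape):
    """Grid-scan the signal yield and return sign(N_sig)*sqrt(2*dlogL)."""
    logl0 = poisson_loglike(n_obs, bkg)
    best_n, best_logl = 0.0, logl0
    for n_sig in np.linspace(-20.0, 80.0, 2001):
        mu = bkg + n_sig * sig_shape
        if np.any(mu <= 0):
            continue
        logl = poisson_loglike(n_obs, mu)
        if logl > best_logl:
            best_n, best_logl = n_sig, logl
    return np.sign(best_n) * np.sqrt(2 * (best_logl - logl0))

rng = np.random.default_rng(1)
bkg = np.full(20, 50.0)                            # flat background expectation
sig = np.exp(-0.5 * ((np.arange(20) - 10) / 2.0) ** 2)
sig /= sig.sum()                                   # unit-normalized signal shape
n_obs = rng.poisson(bkg + 30 * sig)                # toy data with injected signal
print(local_significance(n_obs, bkg, sig))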
To look for a possible signal in the μ⁺μ⁻ invariant mass system, the reduced invariant mass

m_R = √(m²_{μ⁺μ⁻} − 4m²_μ)

was studied, in 50σ-wide sliding mass windows. Corrections had to be applied since the Monte Carlo, in general, overestimated the QED background contribution, as a consequence of an imprecise treatment of ISR photon emission; some additional tracking and PID inefficiencies had to be taken into account as well. As in the previous case, the search for a resonant signal in the μ⁺μ⁻ system was performed separately in data samples taken at different energies, and the combination of fit likelihoods was performed only at a final stage. No significant signal was found in the mass range 0.212 < m_Z′ < 10 GeV, the maximum global significance from the likelihood ratio being just 1.6σ, with a local maximum of 4.3σ at 830 MeV. Again, it was however possible to derive the upper limit on the Z′ production cross section and decay at 90% C.L., and from it to deduce the exclusion area in the (m_Z′, g′) parameter space (g′ being the coupling constant to the SM of this new gauge symmetry). Fig. 3 reports the area excluded by BABAR superimposed on those from other neutrino experiments.

Figure 3. The 90% upper limit on the gauge coupling g′ as a function of the Z′ mass from BABAR [10], together with the constraints coming from the production of μ⁺μ⁻ pairs in ν_μ scattering experiments.

Compared to the neutrino experiments' results, BABAR is able to provide more extended exclusion ranges and fixes an upper limit on the g′ coupling down to ∼7 × 10⁻⁴ close to the dimuon threshold. Apart from a small mass region down to a few MeV, BABAR provides these powerful constraints under the hypothesis that the Z′ gauge boson couples universally to muons and taus and their respective neutrinos, and it rules out almost completely the favoured band for the explanation of the (g − 2)_μ anomaly by means of this portal.

Conclusions

A broad interest has arisen in recent years in the search for possible light Dark Matter candidates; among them, spin-1 gauge bosons are the most sought after, as e⁺e⁻ factories are the ideal places to study their production and decays, together with electron-induced interactions at high-intensity electron beam facilities [13]. So far, many experiments have provided upper limits on their observation and excluded wide regions of the Dark Sector model parameters, due to no evidence of a signal. The precision and quality of these measurements will soon be overtaken by high-statistics results from Belle and the upcoming Belle-II experiment; however, among all experiments at colliders, the BABAR measurements so far provide the best sensitivity for light Dark Matter searches. Moreover, the BABAR results are able to rule out almost completely the (g − 2)_μ favored region, as well as the mass region down to 1 MeV for dark photons decaying into the invisible channel. The field is nowadays very lively, with several analyses of BABAR data still ongoing on the subject: more novel results are expected soon.
Implications of the XENON1T excess on the dark matter interpretation

The dark matter interpretation of a recent observation of excessive electron recoil events at the XENON1T detector seems challenging, because the dark matter velocity is not large enough to give rise to recoiling electrons of O(keV). Fast-moving or boosted dark matter scenarios are receiving attention as a remedy for this issue, rendering the dark matter interpretation a possibility to explain the anomaly. We investigate various scenarios where such dark matter of spin 0 and 1/2 interacts with electrons via the exchange of vector, axial-vector, pseudo-scalar, or scalar mediators. We find parameter values that not only reproduce the excess but are also consistent with existing bounds. Our study suggests that the scales of mass and coupling parameters preferred by the excess are mostly affected by the type of mediator, and that significantly boosted dark matter can explain the excess depending on the mediator type and its mass choice. The method proposed in this work is general, and hence readily applicable to the interpretation of observed data in dark matter direct detection experiments.

Introduction

Dark matter is a crucial ingredient in the cosmological history of the universe and accounts for about 27% of the energy budget of the universe today. As its existence is supported by galactic-scale to cosmological-scale gravity-based evidence, various experiments were performed, are operational, and are planned to detect dark matter via its hypothetical non-gravitational interactions with Standard Model (SM) particles. While no conclusive observations have been made thus far, the XENON Collaboration has recently reported an excess of electron recoil events over known backgrounds with an exposure of 0.65 ton·year [1]. The excess is shown below 7 keV, and most of the events populate 2−3 keV. The XENON1T detector is designed to have an extremely low rate of background events, so this excess could be considered a sign of new physics. The XENON Collaboration has claimed that while the unresolved β decays of tritium can explain the excess at 3.2σ significance, the solar axion model and the neutrino magnetic moment signal can be favored at 3.5σ and 3.2σ significance, respectively [1]. It is expected that confirmation or rejection of these hypotheses will be achieved with more statistics in the near future. In the meantime, the authors of ref. [2] carefully investigated an alternative possibility, that the XENON1T excess can be explained by dark matter, and argued what type of dark matter would be the case. Their results indicate that the interpretation with conventional WIMP dark matter is less favored, essentially because of its non-relativistic nature. Indeed, for any conventional dark matter sufficiently heavier than the electron, the scale of the electron recoil (kinetic) energy is ∼ m_e × (10⁻³c)² ≈ O(eV), with m_e and 10⁻³c being the mass of the electron and the typical velocity of dark matter near the earth, respectively. This simple order estimate suggests that the energy deposition by conventional dark matter is not large enough to accommodate the excessive events of O(keV). As supported by the observation made in ref.
[2], however, it is possible to avoid this issue by envisioning non-conventional dark-sector scenarios involving a mechanism to exert a sufficient boost on a dark matter component, rendering the dark matter hypothesis plausible enough to explain the excess. In particular, upon confirmation, the XENON1T anomaly could be the first signal to indicate that the associated dark sector is non-conventional, opening a new pathway toward dark matter phenomenology. Indeed, the authors of ref. [3] pointed out, for the first time, that the XENON1T experiment would be sensitive enough to the fast-moving χ₁ (which arises in the two-component boosted dark matter (BDM) scenario) interacting with electrons. Along this line, we entertain a class of non-conventional dark-sector scenarios to explain the XENON1T excess in this paper, focusing in particular on the impact of the particle mediating the dark-matter-electron interactions in the context of the BDM scenario as a concrete example. The rest of this paper is organized as follows. We begin with a brief review of the models of BDM, including the production mechanisms of BDM in the context of the XENON1T excess, in section 2. To develop our intuition on the BDM parameter space relevant to the excess, we perform a model-independent single-bin analysis in section 3. Predicated upon that intuition, we choose a few benchmark mass points and define various BDM models in terms of the spins of the BDM and the mediator in section 4. We then perform shape analyses of the recoil electron spectra, including detector resolution and efficiency, for the different cases defined. Section 5 is reserved for discussions of model consistency checks and potential impacts of the electron ionization form factor. Finally, our conclusions and a summary of our case studies appear in section 6.

Dark matter interpretation

As mentioned previously, it is challenging to accommodate the XENON1T anomaly using ordinary halo dark matter, since its typical velocity is too small to invoke keV-scale energy deposition on target electrons. Bosonic dark matter (e.g., an axion-like particle or a dark photon) of keV-scale mass could be absorbed, depositing its whole mass energy in the XENON1T detector. However, this is likely to give rise to a line-like signature, so this possibility is less preferred by the observed recoil energy spectrum. Indeed, the XENON Collaboration found that no bosonic dark matter of mass between 1 and 210 keV shows more than 3σ significance, unlike the other interpretations, so they simply set limits for the relevant dark matter candidates. The upshot of this series of observations is that dark matter (or, more generally, a dark matter component) should acquire a sizable enough velocity to transfer keV-scale kinetic energy to a target electron. This approach has been investigated in ref. [2], where the authors claimed that fast-moving dark matter with a velocity of O(0.1c) can fit the XENON1T excess. An important implication of this dark matter interpretation is that the dark matter (candidate) responsible for the excess is not the (cold) galactic halo dark matter, i.e., it is a subdominant fast-moving component, and hence the underlying dark matter scenario is not conventional.¹ Furthermore, it requires a certain mechanism to "boost" this dark matter component in the universe today.

¹ We note that there are other proposals as explanations of the excess. See refs. [4][5][6] for bosonic dark matter absorption, refs. [6][7][8] for exothermic dark matter explanations, refs.
[9][10][11][12][13] for non-standard neutrino interactions, ref. [14] for a possible explanation with conventional non-relativistic dark matter, and refs. [15][16][17][18][19] for other explanations of the excess.

There are several mechanisms and scenarios to serve this purpose, which were originally proposed for other motivations: semi-annihilation [20], (two-component) boosted dark matter scenarios [21][22][23], models involving dark-matter-induced nucleon decays inside the sun [24], and energetic cosmic-ray-induced dark matter [19][25][26][27]. In this paper, we discuss the BDM scenario, focusing on the implications of the XENON1T anomaly for various classes of models considering different spins of the BDM and mediator particles. As will be discussed later, we thoroughly examine a wide range of parameter space that is consistent with current observations, including the detector efficiency, the detector resolution, and the ionization form factor. Reference [28] has performed a similar study for one class of model (fermionic dark matter with a vector mediator) without proper treatment of the ionization form factor, focusing on one particular study point. The standard two-component BDM scenario [22] assumes two different dark matter species: one (say, χ₀) is heavier than the other (say, χ₁). Their stability is often protected by separate unbroken symmetries such as Z₂ ⊗ Z₂ and U(1) ⊗ U(1). One of the species (usually the heavier one, χ₀) has no direct coupling to SM particles, but communicates with the other species χ₁. By contrast, χ₁ can interact with SM particles with a sizable coupling. Therefore, χ₀ is frozen out via indirect communication with the SM sector with the "assistance" of χ₁ (a.k.a. the "assisted" freeze-out mechanism) [21]. In other words, χ₀ pair-annihilates to χ₁ while χ₁ pair-annihilates to SM particles. The relatively sizable coupling of χ₁ to SM particles renders it a negligible dark matter component while keeping χ₀ dominant in the galactic halo. In most of the well-motivated parameter space, conventional dark matter direct detection experiments do not possess meaningful sensitivity to relic χ₀ and χ₁, because of the tiny coupling and negligible statistics, respectively. A phenomenologically intriguing implication of this model setup, particularly relevant to the XENON1T excess, is that χ₁ can acquire a sizable boost factor, simply given by the ratio of the χ₀ mass to the χ₁ mass, in the universe today. Therefore, one may look for the signal induced by such a boosted χ₁. Due to the small χ₁ flux [see also eq. (3.2)], it is usually challenging for small-volume detectors to have signal sensitivity, but ton-scale dark matter direct detection experiments can be sensitive to the boosted χ₁ signal [3,29]. As mentioned earlier, ref. [3] performed the first sensitivity study for the boosted χ₁ interacting with electrons in the XENON1T, LUX-ZEPLIN, and DEAP3600 experiments. Motivated by the proposal in ref. [3], the COSINE-100 Collaboration conducted the first search for BDM-induced signals with a dark matter direct detector² and reported the results [31], including limits on the models of inelastic BDM [23].

Model-independent analysis

Given the above-described BDM models, we first consider simple counting experiments in which we compare the expected number of events with the number of signal events reported by the XENON Collaboration.
This simple unbinned analysis enables us to investigate the BDM parameter space that could (potentially) accommodate the XENON1T anomaly in a model-independent fashion. We expect that this exercise (conservatively) eliminates regions of parameter space inconsistent with generic BDM models. Therefore, the resultant allowed parameter space will serve as a basic guideline for choosing the benchmark points in section 4, for which we perform shape analyses. Denoting the χ₀ and χ₁ mass parameters by m₀ and m₁ correspondingly, we find that if m₁ is approximately 99.0−99.9% of m₀, the χ₁ coming from the pair-annihilation of χ₀ in the present universe can be as fast-moving as 0.04−0.14c. While this simple consideration determines the "desired" mass relation between χ₀ and χ₁, not all mass values are favored by the excess, aside from the various existing limits. More importantly, as will be discussed later, the "favored" velocity range can be significantly altered depending on the underlying mass spectrum and particle types. To investigate these points more systematically, we first consider the number of signal events N_sig briefly mentioned above. It is well known that N_sig is expressed as

N_sig = F₁ σ₁e N_e,tot^eff t_exp,   (3.1)

where F₁, σ₁e, N_e,tot^eff, and t_exp are the flux of boosted χ₁ near the earth, the total scattering cross-section of χ₁ with an electron, the number of effective target electrons in the fiducial volume of the XENON1T detector, and the total exposure time, respectively. Here σ₁e could be affected by the threshold and/or detection efficiencies for recoiling electrons, if a significant number of events populate the region where the recoil electron energy is near the threshold and/or the associated efficiencies are not large enough. The last two factors are experimentally determined, and their product can be deduced from 0.65 ton·year. Regarding N_e,tot^eff, we remark that the binding energy of electrons in the xenon atom is not negligible given the keV scale of the recoiling electron kinetic energy. While the outermost electron (in the O shell) needs 12.1 eV [32] to get ionized, the innermost electron (in the K shell) requires an ionization energy of 34.6 keV [33]. Therefore, only some fraction of the electrons can be target electrons for BDM that mostly induces keV-scale energy deposition. Some works have considered form factors to calculate the dark matter event rate explaining the XENON1T excess. For example, ref. [2] used the atomic excitation factor with relativistic corrections, and ref. [34] considered the dark matter and ionization form factors, restricted to the N-shell and O-shell electrons. We here take a shortcut scheme, reserving a dedicated analysis for future work [35]. As a conservative approach, we consider the electrons from the three outermost orbitals (5p, 5s and 4d), which are known to give the dominant contribution [34,36,37]; i.e., the number of target electrons in a single xenon atom, N_e^eff, is taken to be 18 throughout our analysis.³

³ Note that the largest ionization energy among the electrons belonging to the three orbitals is ∼76 eV [32], which would induce less than 5% uncertainty in estimating 2−3 keV energy deposition. Since we will consider an energy resolution of ∼450 eV [38], we expect that this 0.1 keV-level uncertainty is buried in the detector resolution. We also note that each of the N-shell and O-shell electrons gets excited with a different weight. We expect that this would make an O(1) effect, so our findings and conclusions in the analysis would remain valid.
Figure 1. The maximum electron recoil energy and the required BDM-electron scattering cross-section in the (m₁, E₁) plane. The result shows the BDM parameter space that could potentially accommodate the XENON1T anomaly in a model-independent fashion. The gray-shaded lower-right area is disfavored because the expected maximum electron recoil energy is less than the typical energy associated with the observed excess, i.e., E_r^max < 2 keV. The upper region requires large cross-sections, which can result in mean free paths (ℓ₁ ∝ 1/σ₁e) inside the earth too small to reach the XENON1T detector; we show a mean free path value at E₁ = 100 MeV for reference. Three diagonal lines represent the velocity of the BDM for a given choice of the (m₁, E₁) pairs.

The estimate of the flux F₁ depends on the source of the BDM, and we consider here the χ₁ coming from the galactic halo for illustration. Assuming that the χ₀ halo profile follows the Navarro-Frenk-White profile [39,40], we see that the all-sky F₁ scales as [22]

F₁ ∝ (⟨σv⟩₀→₁ / 5 × 10⁻²⁶ cm³ s⁻¹) (1/m₀)²,   (3.2)

where the velocity-averaged annihilation cross-section ⟨σv⟩₀→₁ is normalized to 5 × 10⁻²⁶ cm³ s⁻¹ to be consistent with the observed relic abundance. Note that the flux is proportional to the inverse mass squared, so, roughly speaking, a large (small) m₀ prefers a large (small) value of σ₁e to reproduce the excessive number of events at XENON1T. In figure 1, we present the maximum recoil energy, i.e., the kinetic energy of the electron scattered off by the BDM [22,23],

E_r^max = 2 m_e p₁² / s,   (3.3)

where p₁² = E₁² − m₁² and s = m₁² + m_e² + 2 m_e E₁, with E₁ being the total energy of the boosted χ₁, and the required BDM-electron scattering cross-section to produce 100 recoil events at the XENON1T detector with the 0.65 ton·year exposure, σ₁e^100, in the (m₁, E₁) plane. Note that although the number of excessive events is about 50, we quote the cross-section for a nominal 100 signal events; E_r^max must be at least 2 keV because the observed excess is most pronounced at 2−3 keV. The disfavored region with E_r^max < 2 keV is gray-shaded. From eqs. (3.1) and (3.2), σ₁e^100 ∝ m₀² = E₁² (the χ₁ produced in χ₀ pair-annihilation at rest carries E₁ = m₀), so the required cross-section increases quadratically in E₁. One should keep in mind that too large a σ₁e^100 is constrained by too short a mean free path and (potentially) by various experimental bounds on the mediator mass and the associated coupling. We will discuss these issues in the context of specific benchmark points later. To study the model-dependence of the BDM scattering cross-section, we consider a vector mediator V_μ, an axial-vector mediator A_μ, a pseudo-scalar mediator a, and a scalar mediator φ, together with (Dirac-)fermionic BDM χ₁ and (complex-)scalar BDM ϕ₁; the seven different cases in total are summarized in table 1 with the renormalizable and Lorentz-invariant interaction terms and their associated coupling constants. For the PS and SS cases, the scale of the mediator couplings to dark matter is normalized to m₁. Assuming that the incoming χ₁ is much faster than the electrons in the xenon atoms, we find that the spectrum in the kinetic energy of the recoiling electrons E_r, for incoming BDM energy E₁, has the form

dσ₁e/dE_r = |A|² / [32π m_e p₁² (2 m_e E_r + m_i²)²],

where m_i is the mass of the exchanged mediator. Note that the notations of the dark matter χ₁ and ϕ₁ are simplified to χ and ϕ in the couplings. Here |A|² is the spin-averaged amplitude squared, in which the denominator from the propagator contribution is factored out, and the expressions for the seven cases are also tabulated in table 1.
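To make the counting argument of eqs. (3.1)-(3.3) concrete, the sketch below inverts eq. (3.1) for the cross-section needed to produce a given number of events in 0.65 ton·year, and evaluates the BDM velocity and E_r^max for a chosen mass point. The flux normalization (1.6 × 10⁻⁴ cm⁻² s⁻¹ at m₀ = 20 GeV) is taken from the standard two-component BDM estimate in the literature and is an assumption of this sketch, as is the m₀ = 0.5 GeV example point; the 18 effective electrons per xenon atom follow the text.

import numpy as np

N_A, A_XE, N_EFF_E = 6.022e23, 131.3, 18  # Avogadro, Xe molar mass (g/mol), eff. electrons

def bdm_flux(m0_GeV, sigv=5e-26):
    """All-sky galactic BDM flux in cm^-2 s^-1 (assumed normalization)."""
    return 1.6e-4 * (sigv / 5e-26) * (20.0 / m0_GeV) ** 2

def bdm_velocity(m1_over_m0):
    """v1/c for chi_1 from chi_0 pair-annihilation at rest (E1 = m0)."""
    return np.sqrt(1.0 - m1_over_m0 ** 2)

def er_max_keV(m1_MeV, E1_MeV):
    """Eq. (3.3): maximum electron recoil kinetic energy, in keV."""
    me = 0.511
    p1sq = E1_MeV ** 2 - m1_MeV ** 2
    s = m1_MeV ** 2 + me ** 2 + 2 * me * E1_MeV
    return 1e3 * 2 * me * p1sq / s

def required_sigma_1e(n_sig, m0_GeV, exposure_ton_yr=0.65):
    """Invert eq. (3.1), N_sig = F1 * sigma_1e * N_e,tot * t_exp."""
    grams_seconds = exposure_ton_yr * 1e6 * 3.156e7
    n_e_times_t = grams_seconds * (N_A / A_XE) * N_EFF_E
    return n_sig / (bdm_flux(m0_GeV) * n_e_times_t)

print(bdm_velocity(0.999), bdm_velocity(0.990))   # ~0.045c and ~0.14c, as in the text
print(er_max_keV(m1_MeV=495.0, E1_MeV=500.0))     # keV-scale maximum recoil energy
print(f"{required_sigma_1e(100, 0.5):.1e} cm^2")  # of order 1e-34 cm^2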
Case studies: shape analysis

We are now in a position to look into the aforementioned cases, starting with (a) the vector mediator case, followed by (b) the axial-vector mediator case, (c) the pseudo-scalar mediator case, and (d) the scalar mediator case. As discussed in the previous section, figure 1 allows us to develop a useful intuition on which values of (m₁, E₁) could provide a plausible explanation for the excess. Since the two dark matter components are assumed to be thermally produced, we assume that the mass of the heavier component (i.e., the dominant relic) is larger than, at least, a few MeV. Therefore, if the plane displayed in figure 1 is divided into four quadrants, the third one (i.e., the lower-left region) will not be under consideration. In the first quadrant, we consider two possibilities, depending on the choice of the mediator mass, as the model points could show qualitatively different features in their recoil energy spectrum. By contrast, we find that a similar extra division is not necessary for the second quadrant. Hence, to develop intuition on the differential spectrum, we consider three different regions of mass space: (i) m_e ≪ m₁ with a heavy mediator (m_e ≪ m_i), (ii) m_e ≪ m₁ with a light mediator (m_i ≲ m_e), and (iii) m₁ ≲ m_e, where m_i is the mediator mass with i = V, A, a, φ, and m₀ is again assumed to be greater than m₁ in all cases. The model points in (i), (ii), and (iii), in general, exhibit different kinematic features in their recoil-electron energy spectrum. We will illustrate them with three benchmark mass points (BPs) throughout this section. We first consider the VF case (i.e., fermionic BDM), displaying example unit-normalized recoil energy spectra in figure 2 for our benchmark parameter choices shown in the legend. The solid (E_r) and dashed (E_r^obs) lines are the spectra without and with the detector resolution and efficiency, respectively. We hereinafter include the detector efficiency f_eff(E_r^obs) and resolution σ_res reported in ref. [1] and ref. [38], respectively. We implement the detector resolution using Gaussian smearing with σ_res = 0.45 keV as follows,

dσ̃₁e/dE_r^obs = ∫ dE_r [1/(√(2π) σ_res)] exp[−(E_r^obs − E_r)²/(2σ_res²)] (dσ₁e/dE_r),

where E_r^obs is the smeared recoil energy, which is what is observed in the experiment. Note that the recoil energy of the targets in experimental results, including those of the recent XENON1T analysis, is technically this E_r^obs in our notation. Hence we need to show the fitting result in terms of E_r^obs, not the un-smeared recoil energy E_r. Then the differential distribution of the observed recoil energy is given by

dN/dE_r^obs ∝ f_eff(E_r^obs) (dσ̃₁e/dE_r^obs),

where the efficiency function f_eff(E_r^obs) is applied to the smeared recoil energy. The maximum recoil energy is given in eq. (3.3), while the minimum kinetic energy of the recoiling electron is zero [23]. Note that we show unit-normalized recoil energy spectra to facilitate the comparison among the various scenarios and the contrast of the shape distortion. When showing the XENON data and our fit, we take the flux and detector information into account properly, as mentioned in eq. (3.1) and the related discussion.
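The smearing-plus-efficiency map just described is straightforward to implement numerically. The sketch below convolves a theoretical spectrum with the 0.45 keV Gaussian and applies an efficiency weight; the logistic efficiency curve is a made-up stand-in for the published XENON1T efficiency, and the flat input spectrum simply mimics the BP1-like shape discussed next.

import numpy as np

SIGMA_RES = 0.45  # keV, Gaussian resolution used in the text

def efficiency(e_obs_keV):
    """Stand-in logistic efficiency: ~0 below ~1 keV, ~flat above ~3 keV.
    The actual analysis uses the efficiency curve published in ref. [1]."""
    return 1.0 / (1.0 + np.exp(-(e_obs_keV - 1.8) / 0.4))

def observed_spectrum(e_r, dsig_de, e_obs):
    """Gaussian-smear a theoretical dsigma/dE_r onto E_r^obs and weight
    by the efficiency, following the two equations above."""
    kern = np.exp(-0.5 * ((e_obs[:, None] - e_r[None, :]) / SIGMA_RES) ** 2)
    kern /= SIGMA_RES * np.sqrt(2 * np.pi)
    smeared = (kern * dsig_de[None, :]).sum(axis=1) * (e_r[1] - e_r[0])
    return efficiency(e_obs) * smeared

e_r = np.linspace(0.0, 5.0, 501)       # flat spectrum up to E_r^max = 5 keV
e_obs = np.linspace(0.0, 8.0, 161)
spec = observed_spectrum(e_r, np.ones_like(e_r), e_obs)
print(e_obs[np.argmax(spec)])          # the peak shifts into the 3-5 keV range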
In BP1 (red), the BDM χ₁ has a speed of v₁ = 0.06c, and hence lies in the 68% C.L. favored region of figure 2 of ref. [2], as also supported by the typical recoil energy of O(keV). Furthermore, since m₁, m_V ≫ m_e and E₁ ≈ m₁, the E_r-dependent terms in both the denominator and the numerator of the associated differential cross-section become subdominant or negligible. Therefore, the theoretical spectral shape (i.e., without the detector resolution and efficiency) is almost flat over the allowed E_r range in this limit,

dσ₁e/dE_r ≈ (g_V^χ g_V^e)² m_e E₁² / (2π p₁² m_V⁴),   (4.4)

from which we find the total cross-section⁴ to be

σ₁e ≈ (g_V^χ g_V^e)² m_e² / (π m_V⁴).   (4.5)

This flat distribution [BP1 (red) in figure 2] can be distorted into a rising-and-falling shape by detector smearing and efficiency, as shown by the red dashed curve. As one can clearly see in ref. [1], the detector efficiency is very small for recoil energies below ∼1−2 keV and practically flat above ∼3 keV. This is the reason the red dashed line shows a rapidly increasing behavior (from zero) for 0.5 keV ≲ E_r ≲ 3 keV, unlike the solid line. By contrast, the detector smearing develops a falling tail beyond E_r^max. In general, one can expect to see a rising behavior at low energy due to the detector efficiency and a falling behavior at high energy due to the detector smearing together with the underlying model details, as shown in several solid curves. Whether the (somewhat) flat region is narrow or wide depends on the kinematics of the DM-electron scattering.

⁴ Our expression has a mass dependence different from the finding in arXiv v1 of ref. [28]. Ours is proportional to m_e² (vs. m_e m₁ in arXiv v1 of ref. [28]), resulting in smaller estimates of the cross-section.

For BP2 (green), we choose a mediator V lighter than the electron. Unlike the previous case, the expected theoretical recoil energy spectrum falls off rapidly,

dσ₁e/dE_r ≈ (g_V^χ g_V^e)² m₁² / (8π m_e p₁² E_r²),   (4.6)

for which the total cross-section is dominated by the region E_r → 0. The reason is that the differential cross-section in the electron recoil momentum peaks toward small p_e (≪ m_e) due to the t-channel exchange of V, and this feature is more prominent for m_V ≪ m_e [41]. Once detector effects are included, events are expected to populate most densely around 2−3 keV (see the green dashed curve). However, a caveat to keep in mind is that too small an m_V would lead most of the events to lie below 2 keV, since dσ₁e/dE_r goes like 1/E_r². Our numerical study suggests that m_V ≳ 5 keV would be favored by the data for the chosen (m₀, m₁) pair. This observation motivates BP3 (blue), where BDM even lighter than the electron acquires a significant boost factor. An approximation similar to eq. (4.6) goes through with m₁² replaced by E₁², since E₁ ≫ m₁. As also shown in figure 2, the differential spectrum is not much different from that of the second benchmark point, except for a long tail beyond 7 keV, which may not be appreciable at this earlier stage. Moreover, the spectrum with detector effects (blue dashed) is quite similar to that of BP2. This demonstrates that, unlike the claim in ref. [2], the favored region can be extended further below ∼0.1 MeV and/or further beyond v₁ = 0.3c, as long as m_V is smaller than m_e. However, the preferred range of m_V is more restricted than for BP2: our numerical study shows that m_V ≳ 50 keV would result in more than half of the events lying beyond 7 keV, so that 5 keV ≲ m_V ≲ 50 keV would be favored for the chosen (m₀, m₁) pair. The cross-section σ₁e also determines the mean free path ℓ₁ in the earth, which is given by ∼1/(n̄_e σ₁e), with n̄_e being the mean electron number density along the χ₁ propagation line. Here we assume that χ₁ has negligible interactions with nuclei.
If g_V^e is too large [with g_V^χ set to be O(1)], χ₁ may scatter multiple times inside the earth before reaching the XENON1T detector, located ∼1,600 m underground, resulting in a substantial loss of the energy that χ₁ initially carries. The situation becomes worse if χ₁ comes from the opposite side of the earth. As shown in eq. (3.1), F₁ and σ₁e are complementary to each other for a fixed N_sig, i.e., a small F₁ would be compensated by a large σ₁e at the expense of multiple scatterings of χ₁. This scenario was explored in ref. [28]. In our study, we rather focus on the opposite case, where σ₁e is small (hence no worries about the issue of too many scatterings) but a sub-GeV (and smaller) χ₀ allows a large flux of χ₁. In figure 3, we show sample energy distributions for the three benchmark mass spectra taken in figure 2, assuming g_V^χ = 1 and galactic BDM whose flux is given by eq. (3.2). The values of σ₁e and g_V^e associated with these fits are shown in the legend. The black dots and the gray line are the data points and the background model (with negligible tritium contributions), imported from ref. [1].

A few comments should be made on the quoted σ₁e and g_V^e values. First, the required σ₁e is of order 10⁻³⁵−10⁻³⁴ cm², resulting in more than ten thousand km (∼ the diameter of the earth) of mean free path, i.e., at most a handful of χ₁ scatterings would arise inside the earth before reaching the XENON1T detector; see also the reference lines for σ₁e^100 and ℓ₁ in figure 1. Second, there are mild differences among the quoted σ₁e values, although the BDM flux is fixed for all three benchmark points. As discussed earlier, the nominal scattering cross-section needed to explain the excess can differ due to the detector effects. As suggested by figure 2, the green and blue curves are more affected by the detector efficiency, since more of their events are expected to populate the lower-energy regime. Therefore, these two points typically demand a nominal BDM scattering cross-section greater than that of the other one. Third, some of the reported g_V^e values might be in tension with existing limits, depending on the underlying model details. We will revisit this potential issue in the next section. Finally, we briefly discuss how variation in the dark matter spin affects the conclusions we have made so far for the VF case. We see that |A|² for the VS case is approximated by 16 m_e² E₁², just like the VF case, and we therefore expect similar spectral behaviors. We find that the actual distributions look very similar to the corresponding ones with χ₁ for the same mass choices, so similar conclusions hold.

(b) Axial-vector mediator. We next move on to the scenario with an axial-vector mediator and fermionic dark matter, i.e., the AF case. The differential cross-section in this case takes the same general form as before, with the numerator now carrying E_r-dependent terms from the axial-vector couplings, and the corresponding energy spectra for the three benchmark mass spectra are shown in the left panel of figure 4. The solid (E_r) and dashed (E_r^obs) lines are the spectra without and with detector effects, respectively. Sample fits with the effects of detector efficiency and resolution included appear in the right panel of figure 4. The values of σ₁e and g_A^e associated with these fits are shown in the legend. The differential energy spectra without detector effects show a box, a (relatively) slowly decreasing, and a rapidly decreasing shape, respectively, for the three benchmark points.
The (unit-normalized) spectra for BP1 and BP3 are similar to those in the VF case, whereas the spectrum for BP2 shows a somewhat different behavior. Let us briefly comment on region (ii): in the E r → 0 limit, the spectrum rises up to a finite value, whereas in the large E r limit, it saturates to a nonzero asymptotic value. In other words, the spectrum gradually decreases toward this asymptotic limit, rather than falling off toward 0 like that in region (iii). The differential energy spectra of BP1 and BP3 with the detection efficiency and smearing effects included are similar to those of the VF case, as depicted by the dashed red and blue lines in the left panel of figure 4. However, the final distribution of BP2 peaks around 2.5 keV, which would potentially give a better fit than that in the VF case. As we have briefly discussed above, this result is due to the E r -dependent terms in the numerator, coming from the axial-vector interactions.

Figure 4. [Left] The corresponding unit-normalized plots with the same benchmark mass spectra as in figure 2 but with the axial-vector mediator and fermionic BDM (i.e., the AF case). The solid (E r ) and the dashed (E r obs ) lines are the spectra without and with detector effects, respectively. [Right] Sample energy spectra for the three benchmark mass spectra. We assume g χ A = 1 and galactic BDM for which the flux is given by eq. (3.2). The values of σ 1e and g e A associated with these fits are shown in the legend.

(c) Pseudo-scalar mediator. We perform similar analyses for the three regions of mass space discussed in the previous section. For fermionic dark matter χ 1 (i.e., the PF case), the theoretical differential cross-sections carry an E r 2 dependence in the numerators, and the corresponding energy spectra for the same benchmark mass spectra as in figure 2 are shown in the left panel of figure 5. Unlike the vector mediator case, the differential cross-section rises with increasing E r due to this E r 2 dependence. For (i) the recoil spectrum increases up to E r max , whereas for (ii) it gradually saturates due to the competition with the E r dependence in the denominator. All these expected behaviors are clearly shown by the solid red and the solid green curves in the left panel of figure 5. Interestingly enough, the differential cross-section for (ii) becomes constant in the limit of m a → 0, and the m a dependence becomes negligible. Therefore, if a small m a is preferred by the data, it may be challenging to determine m a . For region (iii), exactly the same spectral behavior as in region (ii) is expected. However, E r max approaches 9.75 MeV, so that events with keV-scale energy are very unlikely to arise. Indeed, the blue curve clings to the x axis. This rising feature of the recoil energy spectra implies that fewer events are affected by the XENON1T detector efficiency, unlike the (ii) and (iii) regions with a vector mediator. In other words, the nominal cross-sections differ little from the corresponding fiducial cross-sections. On the other hand, the total cross-section is much smaller than that of the vector mediator scenario for the same mass spectra and the same coupling strengths, because the E r 2 dependence (i.e., ∼1−10 keV 2 ) is numerically much smaller than the E 1 2 dependence [see the discussions near eqs. (4.4) and (4.6)]. This implies that in order to obtain the required cross-section for a given BDM flux, a significantly larger coupling strength is needed, compared to the corresponding value for the vector mediator.
The right panel of figure 5 shows sample energy spectra for BP1 and BP2 with g χ a = 1 and galactic BDM, and clearly supports all these expectations. The quoted σ 1e values are slightly smaller than those in figure 3. We also find that the required values of g e a are larger than the g e V values in figure 3 by roughly four orders of magnitude. They may be strongly disfavored by the existing limits. We again revisit this issue in the next section.

Figure 5. [Left] The corresponding unit-normalized plots with the same benchmark mass spectra as in figure 2 but with the pseudo-scalar mediator and fermionic BDM (i.e., the PF case). The solid (E r ) and the dashed (E r obs ) lines are the spectra without and with detector effects, respectively. For the (iii) region, the spectrum rises very slowly toward E r max ≈ 9.75 MeV, so that events with keV-scale recoil energy are very unlikely to arise and the corresponding blue curve appears invisible. [Right] Sample energy spectra for the first two benchmark mass spectra. We assume g χ a = 1 and galactic BDM for which the flux is given by eq. (3.2). The values of σ 1e and g e a associated with these fits are shown in the legend.

When it comes to the case with scalar BDM (i.e., the PS case), we see that the E r dependence in the numerator is linear, so that the rising feature becomes mitigated. In particular, for regions (ii) and (iii) the recoil energy distributions can be described by a rising-and-falling shape, so it is possible to find ranges of parameter space to explain the XENON1T excess. We do not pursue an investigation to identify such parameter space here, reserving it for future work. (d) Scalar mediator. Given the discussions thus far, we are now equipped with enough intuition to understand the scalar mediator case qualitatively. In the SF case, |A| 2 behaves like ∼m e 2 m 1 2 for the (i) and (ii) regions, so the argument for the same benchmark regions of the vector mediator scenario essentially carries over modulo numerical prefactors. By contrast, a linear E r dependence can survive for the (iii) region, i.e., |A| 2 ∝ 2m 1 2 + m e E r , and as a result the recoil energy spectrum can be of rising-and-falling shape like the (ii) and (iii) regions of the PS case. In the SS case, |A| 2 ∝ m e 2 m 1 2 = const., so the overall expectations parallel those in the VF case, except that the scattering cross-sections are much smaller than those in the VF case for a given set of mass values and coupling strengths.

Discussions

In this section, we discuss the implications of our findings: the consistency of the fit parameters with existing limits, the scattering of BDM on xenon nuclei, and the potential impact of including the ionization form factor. As mentioned before, the quoted parameter values to explain the XENON1T excess may be in tension with existing bounds. Identifying V as a dark photon and considering BP1 in figure 3, we find that the (m V , g e V ) pair is safe from the existing bounds. In terms of the standard kinetic mixing parameter ε, g e V = 2.4 × 10 −4 translates to ε = g e V /e = 7.9 × 10 −4 , which is not yet excluded by the latest limits [42]. Here e = √(4πα) ≈ 0.30 is the electric charge, where α is the fine structure constant. However, the parameter values for BP2 and BP3 are strongly constrained by the limits from various astrophysical searches. The same tension arises for BP2 in the right panel of figure 5 with a identified as, say, an axion-like particle.
Indeed, it was argued that there are ways to circumvent those astrophysical bounds that would rule out such dark photons and axion-like particles. The main idea is that if the coupling constant and the mass parameter have an effective dependence on the environmental conditions of astrophysical objects, such as temperature and matter density, which are very different from the conditions in the XENON1T experiment, the limits can be relaxed by several orders of magnitude [43][44][45][46]. Several works discuss the relevant mechanisms in the context of specific particle physics models, e.g., refs. [47][48][49][50][51][52][53], concise summaries of which can be found in refs. [54,55]. Furthermore, ref. [46] pointed out that the energy loss process inside the stellar medium could be quenched because of absorption at large values of the coupling. Therefore, a careful check is needed to see whether these parameter points are disfavored by the astrophysical bounds. Finally, in regard to the (m a , g e a ) values for BP1 in the right panel of figure 5, to the best of our knowledge there seem to be no existing searches sensitive to this parameter point. However, due to the relatively large size of the coupling, we expect that existing or near-future laboratory-based experiments such as accelerator experiments can test this parameter point. Moving on to the second issue, one may ask whether BDM would scatter off a xenon nucleus and whether this dark matter interpretation would contradict the null signal observation in the nuclear recoil channel at the XENON1T detector. A possible solution is to assume that the mediator is "baryo-phobic" or "electro-philic". Aside from model dynamics, we can check this issue using kinematics. The maximum kinetic energy of a recoiling xenon nucleus, E r,Xe max , is simply given by eq. (3.3) with m e replaced by m Xe and with s approximated to m Xe 2 . For BP1 and BP2, p 1 ≈ 630 keV gives E r,Xe max ≈ 6 × 10 −3 keV, whereas for BP3, p 1 = 10 MeV results in E r,Xe max ≈ 1.6 keV. Therefore, XENON1T is not sensitive enough to the dark matter signals from the three benchmark points in the nucleus scattering channel. However, if E 1 increases, XENON1T starts to become sensitive to the signals belonging to region (iii) in the nucleus scattering channel, allowing for complementarity between the electron and nucleus recoil channels. Finally, we would like to comment on the effects of the ionization form factor. In general, the form factors fall steeply with the recoil momentum, and therefore the ionization form factor strongly biases the scattering towards low-momentum recoils. In addition, the form factor does not necessarily fall monotonically and thus could modify the recoil energy spectrum [35][36][37][56][57][58][59]. The ionization factor can be calculated by using the Roothaan-Hartree-Fock wave function for the initial state electron [56] and applying the plane wave approximation for the final state electron. We have followed the procedure described in refs. [34,57] to compute the ionization form factor for the interaction between BDM and the electrons in a xenon atom. We consider the three outermost orbitals (5p, 5s, and 4d), with respective binding energies of ∼12, 26 and 76 eV, which are known to be the dominant contribution [36,37].

Figure 6. Sample energy spectra including the ionization factor for the same benchmark mass spectra and particle spins (VF case) as in figure 2, in units of events/(t·y·keV); the legend values are σ 1e = 3.3 × 10 −35 cm 2 , g e V = 2.4 × 10 −4 ; σ 1e = 8.6 × 10 −35 cm 2 , g e V = 3.5 × 10 −9 ; and σ 1e = 1.9 × 10 −35 cm 2 , g e V = 5.0 × 10 −8 . The three dashed curves are the same as those appearing in figure 3, which already incorporate the detector resolution and the detector efficiency. The solid curves are the corresponding ones further including the effects of the ionization factor by the electrons in the three outer shells, in addition to the detector resolution and efficiency.
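As a quick numerical cross-check of the xenon-recoil kinematics quoted above (a sketch under the stated approximation, not the paper's code), the relation E r,Xe max ≈ 2 m Xe p 1 2 /s with s ≈ m Xe 2 reduces to E r,Xe max ≈ 2 p 1 2 /m Xe ; the xenon mass below is an assumed round value.

```python
# Maximum xenon recoil energy in the s ~ m_Xe^2 approximation.
# All energies in keV.
m_Xe = 122e6  # xenon nucleus mass [keV] (assumed: A ~ 131, ~0.93 GeV/nucleon)

for label, p1 in (("BP1/BP2", 630.0), ("BP3", 1.0e4)):
    E_r_max = 2.0 * p1**2 / m_Xe
    print(f"{label}: p_1 = {p1:g} keV -> E_r,Xe^max ~ {E_r_max:.2g} keV")

# BP1/BP2: ~6.5e-3 keV; BP3: ~1.6 keV, matching the numbers in the text.
```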
As a cross-check, we have reproduced relevant results such as the ionization form factor from each shell and the differential recoil spectra for some physics examples as in refs. [36,37]. We have also compared our approach against a more sophisticated method in which the final electron state is described by a positive-energy continuum solution of the Schrödinger equation with a hydrogenic potential [58][59][60]. We find that the plane wave approximation works reasonably well at low momentum transfer, as noted in refs. [59,60]. In figure 6 we show the energy spectra including the ionization factor for the same benchmark mass spectra and particle spins (VF case) as in figure 2. The three dashed curves represent the energy spectra in figure 3, which already incorporate the detector resolution and the detector efficiency, while the solid curves take into account the effects of the ionization factor, considering the 18 electrons in the three outermost orbitals, as well as the detector resolution and efficiency. Just as the detector efficiency and resolution affect the shape of the energy spectra, the ionization form factor introduces additional distortion. Nevertheless, the effects of the ionization factor on the shape of the energy spectra are mild, and the main features remain very similar. We find that the spectra for the other particle spins also remain very similar to those in figure 5 [35]. While it is interesting and informative to quantify the impacts of the three factors (detector efficiency, detector resolution, and the ionization factor) on the theoretical shapes, it is beyond the scope of our study here and we reserve this subject for an upcoming study [35].

Conclusions

The dark matter interpretation of the XENON1T anomaly favors the existence of fast-moving or boosted dark matter component(s) in the present universe, which may require non-conventional dark matter dynamics. We investigated various cases in which such dark matter of spin 1/2 and 0 interacts with electrons via vector, axial-vector, pseudo-scalar, or scalar mediators, in the context of the two-component boosted dark matter model as a concrete example. Our findings are summarized in table 2. (Table 2 caption: γ BDM denotes the Lorentz boost factor of BDM. Check marks indicate that one can find mass spectra to reproduce the XENON1T excess and satisfy the conditions of the associated regions, while for entries with a qualified mark a certain range of mediator mass may not reproduce the XENON1T excess. By contrast, a cross mark indicates that it is generally hard to find a mass spectrum to explain the excess. The general shape of the expected recoil energy spectra is described in the parentheses.) We found that there exists a set of parameter choices that is compatible with existing bounds and accommodates the anomaly. In particular, the scales of the mass and coupling parameters are sensitive to the mediator choice. Our study further suggested that with appropriate choices of the mediator and its mass, significantly boosted dark matter can be allowed on top of the moderately fast-moving dark matter.
Finally, we emphasize that the analysis method that we have proposed in this work is general, so we expect that it is readily applicable to the interpretation of observed data in other dark matter direct detection experiments.

Note added. We confirm that our total cross-section formula in eq. (4.5) agrees with the corresponding expression in the updated version of ref. [28].

Acknowledgments. The authors acknowledge support from the National Research Foundation of Korea (NRF-2020R1I1A3072747). This paper was supported by research funds for newly appointed professors of Jeonbuk National University in 2019.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
10,141.6
2020-06-29T00:00:00.000
[ "Physics" ]
Fos-related Antigen 2 Controls Protein Kinase A-induced CCAAT/Enhancer-binding Protein β Expression in Osteoblasts*

Transcription factor CCAAT/enhancer-binding protein β (C/EBPβ) plays an important role in hormone-dependent gene expression. In osteoblasts C/EBPβ can increase insulin-like growth factor I (IGF-I) transcription following treatment with hormones that activate protein kinase A, but little is known as yet about the expression of C/EBPβ itself in these cells. We initially showed that prostaglandin E2 (PGE2) rapidly enhances C/EBPβ mRNA and protein expression, and in this study we identified a 3′-proximal region of the C/EBPβ promoter containing a 541-bp upstream sequence that could account for this effect. PGE2-dependent activation of C/EBPβ was blocked by expression of a mutated regulatory subunit of protein kinase A or by mutation of two previously identified cAMP-sensitive cis-acting regulatory elements within the promoter between bp −111 and −61. Nuclear protein binding to these elements was induced by PGE2, required new protein synthesis, and was sensitive to antibody to the transcription factor termed Fos-related antigen 2 (Fra-2). Fra-2 cDNA generated from rat osteoblasts by reverse transcriptase PCR was 95% homologous to human Fra-2, and PGE2 rapidly induced Fra-2 mRNA and protein expression. Consistent with these findings, over-expression of Fra-2 significantly increased C/EBPβ promoter activity in PGE2-induced osteoblasts, whereas expression of Fra-2 lacking its activation domain had a dominant negative inhibitory effect. Together, these results reveal a significant, hormone-dependent role for Fra-2 in osteoblast function, both directly, through its ability to increase new C/EBPβ gene expression, and indirectly, through downstream C/EBP-sensitive genes.

Many cells and tissues express one or another of the several C/EBP 1 transcription factor gene family members, termed C/EBPα, -β, -δ, -γ, and -ε (1,2). Individual C/EBPs can form homodimers or heterodimers and share common DNA binding response elements, consistent with the high degree of homology in their carboxyl termini, where their dimerization and DNA binding domains reside. Of these, basal expression of C/EBPβ is high in liver, intestines, differentiating adipocytes, lung, kidney, and spleen, as well as in monocytic blood cells. However, basal expression of C/EBPβ is relatively low in osteoblasts, but it can be enhanced by treatment with glucocorticoid, PGE 2 , or 1,25(OH) 2 vitamin D 3 (3)(4)(5). 2 Because the various C/EBPs are widely expressed, it is no surprise that they direct the synthesis of a large panel of target genes. In osteoblasts either C/EBPδ or C/EBPβ, which are variably expressed in several osteoblastic cell models, can in turn activate the expression of several prominent downstream genes, including those encoding IGF-I, IGFBP-5, IL-6, osteocalcin, and cyclooxygenase 2 (4, 6-9). Earlier we reported that Runx2, a transcription factor essential for osteogenesis (10,11), is an important, direct regulator of C/EBPδ expression in osteoblasts, by way of a Runx binding sequence located between bp −165 and −159 in the C/EBPδ gene promoter (12). Moreover, through an apparent negative feedback inhibition, the carboxyl-terminal region of C/EBPδ can bind directly to Runx2 and in this way self-limit C/EBPδ expression and activity. Others have reported roles for STAT3, Sp1, and C/EBPδ itself in the regulation of C/EBPδ expression in other cell models (13)(14)(15)(16).
By contrast, the molecular mediators that direct C/EBPβ gene expression have been better established in nonskeletal tissue-derived cells. For example, studies in hepatocytes defined two cAMP-responsive elements (CREs) that are located between bp −121 and −71 in the C/EBPβ promoter and can interact with CREB and C/EBPβ to drive C/EBPβ gene expression. In those cells, lipopolysaccharide increases C/EBPβ expression through shared or distinct elements that require c-Jun and ATF-2, whereas IL-6-dependent induction of C/EBPβ involves an indirect association of STAT3 with these CRE sequences (17)(18)(19). Importantly, agents or events associated with trauma, inflammation, and the acute phase response have critical effects on C/EBPβ synthesis, perhaps to assist the expression of downstream genes associated with recovery and tissue repair (1,2). In osteoblasts, we earlier reported an increase in C/EBPβ expression in response to PGE 2 (3). In the current study, we have characterized the molecular mediators that can account for this effect. We demonstrate the relative importance of a specific protein kinase system and the two previously identified cis-acting CREs that drive new C/EBPβ synthesis. Finally, we show that one trans-acting transcription factor that can control this event in differentiating osteoblasts is distinct from those reported in these other cell systems.

* This study was supported by Public Health Service Grants DK56310 and AR39201. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. The nucleotide sequence(s) reported in this paper has been submitted to the GenBank TM /EBI Data Bank with accession number(s) AY622611.

EXPERIMENTAL PROCEDURES

Cell Culture-Primary osteoblast-enriched cultures were prepared from parietal bones of 22-day-old Sprague-Dawley rat fetuses (Charles River Breeding Laboratories) by methods approved by the Yale Institutional Animal Care and Use Committee. Bone sutures were dissected, and cells were released from the bone fragments by five sequential collagenase digestions. Cells pooled from the last three digestions express many biochemical features that typify differentiating osteoblasts, including high levels of nuclear factor Runx2, parathyroid hormone (PTH) receptor, type I collagen synthesis, and alkaline phosphatase activity (20-22). They also exhibit an increase in osteocalcin expression in response to vitamin D 3 , differential sensitivity to transforming growth factor-β, bone morphogenetic protein-2, and various prostaglandins, and form mineralized nodules in vitro (23)(24)(25)(26)(27)(28). Cells were plated at 4,000/cm 2 in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. COS-7 cells (CRL 1651) from the ATCC were cultured in identical medium. Hormone treatments were performed in serum-free medium. Plasmids-C/EBPβ promoter constructs, prepared from a library of genomic rat DNA, were based on earlier reported sequence information (Ref. 29 and GenBank TM accession no. AY056052). Mutations were created in two previously described CREs within the C/EBPβ promoter (17) by overlap PCR and mutated oligomer primer pairs. The primers used to mutate CRE1 were: CRE1 forward, 5′-CGCGGCCGGGCAATGGTTCGCACCGACCCGG-3′; and CRE1 reverse, 5′-CGCCGGGTCGGTGCGAACCATTGCCCGGCCG-3′.
Primers used to mutate CRE2 were: CRE2 forward, 5′-GGGAGGGGCCCCGGCGGATCCCAGCCCGTTGCCAGG-3′; and CRE2 reverse, 5′-CGCCTGGCAACGGGCTGGGATCCGCCGGGGCCCCT-3′ (mutated nucleotides are indicated by bold underlined italics). A Fos-related antigen 2 (Fra-2) expression plasmid was prepared from total rat RNA using the C.therm. Polymerase One-step RT-PCR System (Roche Applied Science) with forward primer GAGAATTCGGGAAATGTACCAGGATTATCCCGGG and reverse primer GCTCTAGATTACAGAGCCAGCAGAGTGGGG, based on rat Fra-2 sequence information in GenBank TM (accession no. NM_012954), utilizing the EcoRI and XbaI restriction sites (underlined) for directional cloning. An expression plasmid encoding dominant negative Fra-2 was produced from this construct by reverse transcriptase PCR to delete amino acids 208-328, comprising its transactivation domain. Transfections-Promoter-reporter constructs, gene expression plasmids, or empty parental vectors were pre-titrated for optimal expression efficiency and transfected with the reagent TransIT LT1 (Mirus). Cells at 50-70% culture confluence (25,000-30,000/cm 2 ) were exposed to an optimal amount of expression plasmid (10-20 ng/cm 2 ) or reporter plasmid (50 ng/cm 2 ) in medium supplemented with 0.8% fetal bovine serum for 16 h and then supplemented to obtain a final concentration of 5% serum. Cells were cultured for 48 h, treated as indicated in each figure in serum-free medium, rinsed, and lysed. Nuclear-free supernatants were analyzed for reporter gene activity and corrected for protein content. To account for competition among plasmids for limiting transcription components, control cells were transfected with a compensating amount of empty vector. Transfection efficiency was assessed in parallel with positive and negative reporter plasmids as described previously (28). RNA Analysis-Total RNA was extracted with acid-guanidine-monothiocyanate, precipitated with isopropyl alcohol, and dissolved in sterile water (30). mRNA levels were assessed by Northern blot analysis using 10 μg of RNA denatured in formaldehyde/formamide. Co-electrophoresed RNA standards were used to verify transcript size. Restriction fragments including cDNA inserts encoding rat C/EBPβ or rat Fra-2 were isolated by agarose gel electrophoresis and a QIAquick gel extraction kit (Qiagen Corp.) and were labeled with [α-32 P]dCTP and [α-32 P]dTTP by random hexanucleotide-primed second strand synthesis for use as Northern blot probes. The post-hybridization stringency wash was with 0.2× SSC and 0.1% SDS for 1 h at 55°C. In some instances, after hybridization and autoradiography, primary probes were stripped, and the blots were re-hybridized with a 32 P-labeled 18 S rRNA probe of 80 nucleotides, prepared with the T7 MEGAshortscript kit (Ambion), to assess equal RNA loading and blotting. In other instances, rRNA levels were assessed by staining with ethidium. Nuclear Extracts-Cells were rinsed, harvested, and lysed in a hypotonic buffer supplemented with phosphatase and protease inhibitors and 1% Triton X-100. Nuclei were collected and resuspended in hypertonic buffer with phosphatase and protease inhibitors, and soluble nuclear proteins released after 30 min of extraction were collected by centrifugation as described (3,31,33). Western Immunoblots-Total cell or nuclear extracts were fractionated through SDS-PAGE and electroblotted onto PolyScreen polyvinylidene difluoride transfer membrane (PerkinElmer Life Sciences) with pre-stained molecular weight markers.
Blots were blocked in 5% fat-free powdered milk and probed with specific primary antibodies (Santa Cruz Biotechnologies), and reactive bands were visualized with secondary antibody linked to horseradish peroxidase and chemiluminescence (Western Lightning, PerkinElmer Life Sciences) (3,33). Statistics-Statistical differences in biochemical assays were assessed by one-way analysis of variance and Student-Newman-Keuls post hoc analysis, using SigmaStat software (Jandel Corp.), from a total of nine or more replicate samples from three or more studies, each performed with different cell preparations. A significant difference was assumed at a p value of <0.05.

RESULTS

PGE 2 Induces C/EBPβ Expression-We previously showed that C/EBP is an important regulator of IGF-I expression in PKA-activated osteoblasts, by way of a single high affinity C/EBP binding half-site located within exon 1, a transcribed, noncoding, and highly conserved region of the IGF-I gene. C/EBPδ is the principal endogenous C/EBP in unstimulated osteoblasts (4). Close examination of in vitro binding using the IGF-I promoter-derived C/EBP binding element, designated HS3D, and nuclear extract from PGE 2 -activated osteoblasts showed two prominent complexes and suggested the presence of multiple proteins. As shown in Fig. 1A, antibodies to either C/EBPβ or C/EBPδ each effectively reduced protein binding to this element. The upper gel shift complex that occurred with extract from PGE 2 -activated osteoblasts was primarily sensitive to anti-C/EBPβ antibody, whereas both complexes were reduced by anti-C/EBPδ antibody, consistent with the importance of heterodimers containing both C/EBP isoforms. Treatment with PGE 2 rapidly elevated the levels of C/EBPβ mRNA and protein. A large increase in C/EBPβ mRNA occurred within 1 h of treatment, peaked at 2 h, and declined but remained significantly elevated for at least 24 h (Fig. 1B). By Western immunoblot analysis, a maximal increase in C/EBPβ protein in total osteoblast extract was achieved by 4 h of PGE 2 treatment (Fig. 1C). Locating the PGE 2 -responsive Element in the C/EBPβ Promoter-To locate regulatory elements utilized by osteoblasts after PGE 2 activation, cells were transfected with reporter plasmids encoding progressive C/EBPβ promoter truncations. The C/EBPβ promoter fragments shared a common 3′-end at bp +54 but terminated at bp −2700, −1300, or −541 at their 5′-ends. As shown in Fig. 2A, gene expression through each fragment was significantly induced by 6 h of treatment with PGE 2 . Earlier evidence from studies with hepatocytes showed two important cAMP-sensitive elements downstream of bp −541, located between bp −111 and −61 (17), allowing us to focus our effort more precisely. Indeed, we found that mutation of either or both of these elements within the context of the −541 upstream segment significantly reduced basal C/EBPβ promoter activity in osteoblasts and severely limited the stimulatory effect of PGE 2 , by 75-80%. To assess the kinase systems responsible for the stimulatory effect of PGE 2 , we co-transfected osteoblasts with the fully functional −541 C/EBPβ promoter-reporter construct and either a mutant regulatory subunit of PKA that blocks its activation (PKAreg) or a dominant negative PKC (PKC DN ) that exerts broad-spectrum PKC isoform inhibition (34,35). Expression of the mutant regulatory subunit of PKA completely blocked C/EBPβ promoter activation, whereas expression of the PKC DN protein had no effect (Fig. 2B).
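For readers reproducing significance calls like those above, here is a minimal Python sketch of the workflow described under "Statistics": one-way ANOVA across treatment groups followed by pairwise post hoc testing. The reporter activities below are synthetic, and Tukey's HSD is used as a stand-in for the Student-Newman-Keuls test, which is not available in scipy/statsmodels.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {                      # made-up reporter activities, 9 replicates each
    "control": rng.normal(1.0, 0.15, 9),
    "PGE2": rng.normal(2.4, 0.30, 9),
    "PGE2+PKAreg": rng.normal(1.1, 0.20, 9),
}

# One-way ANOVA across the three groups.
F, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3g}")  # significance at p < 0.05

# Pairwise post hoc comparisons (Tukey HSD as an SNK substitute).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```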
Inducible Protein Binding to CRE1 and CRE2-To assess protein binding to these important CRE elements in the C/EBPβ promoter, we performed EMSA with 32 P-labeled oligonucleotides, designated CRE1 and CRE2, corresponding to each element. Nuclear extract from osteoblasts treated for 4 h with PGE 2 produced inducible gel shift complexes with each element, and these complexes were competed by excess unlabeled homologous oligonucleotides but not by oligonucleotides in which the CRE1 binding sequences were mutated. Data are shown in Fig. 3A for [32 P]CRE1 and unlabeled CRE1 (1), but analogous results occurred with [32 P]CRE2 and mutated CRE2. Unlabeled oligonucleotide HS3D, corresponding to the C/EBP binding element in the IGF-I gene (as described in Fig. 1A), did not compete for protein binding to either CRE probe even at a 100-fold molar excess, and neither anti-C/EBPβ nor anti-C/EBPδ antibody reduced complex formation (Fig. 3B). However, although oligonucleotides corresponding to consensus CREB binding sequences effectively competed for nuclear factor binding to these CREs (Fig. 3B), anti-CREB antibody did not alter complex formation (Fig. 3A). Therefore, these CREs appear to have an important effect on PKA-dependent C/EBPβ expression in osteoblasts, and transcription factors other than C/EBP or CREB appear to associate with these elements after treatment with PGE 2 . Fra-2 Binds to CRE1 and CRE2-PGE 2 activated the C/EBPβ promoter through a PKA-dependent event, and we speculated that these CREs might bind AP-1-like factors based on their nucleotide sequences and previous evidence from studies in osteoblasts (36). Select AP-1 transcription factors are expressed during osteoblast differentiation (37)(38)(39)(40). Therefore we used a panel of antibodies to various AP-1-binding proteins: C/EBPδ, C/EBPβ, CREB (as shown in Fig. 3), ATF-2, and JunD. 2 However, only antibodies specific to the transcription factor Fra-2 effectively modified binding to each of these elements. Again, analogous results occurred with oligonucleotides encoded by elements CRE1 and CRE2, and data obtained using the oligonucleotide specific for CRE1 are shown in Fig. 4A. Western immunoblot analysis showed that PGE 2 induced a rapid accumulation of Fra-2 (Fig. 4B) that was completely blocked by co-treatment with the protein synthesis inhibitor cycloheximide (Fig. 4C). Using sequence information obtained from GenBank TM , we designed primers to synthesize full-length cDNA encoding rat Fra-2 by reverse transcriptase PCR from total osteoblast RNA. The rat Fra-2 sequence that we cloned was 95% homologous to human Fra-2. All sequence variations appeared to be minor, and the predicted gene products retained a protein sequence homology of 96% (Fig. 5).

FIG. 1. PGE 2 induces the expression of C/EBPβ in rat osteoblasts. A, nuclear extracts from osteoblasts treated for 4 h in serum-free medium with vehicle (0) or 1 μM PGE 2 (P) were examined by EMSA with 32 P-labeled oligonucleotide HS3D, corresponding to the C/EBP binding site in the rat IGF-I gene, without (0) or with nonimmune rabbit Ig, anti-C/EBPβ (β), or anti-C/EBPδ (δ) antibody (ab) as indicated. B, total RNA from osteoblasts treated with vehicle (0) or 1 μM PGE 2 for the time periods indicated was fractionated by agarose gel electrophoresis, blotted onto charge-modified nylon, and probed with 32 P-labeled full-length C/EBPβ cDNA. The membrane was stripped and reprobed with a low specific activity 32 P-labeled probe for 18 S rRNA.
Binding was assessed by autoradiography. C, total osteoblast extracts were fractionated by SDS-PAGE through a 12.5% Laemmli gel under reducing conditions and probed with rabbit anti-C/EBPβ polyclonal antiserum.

FIG. 2. Functional CREs in the proximal region of the C/EBPβ gene promoter and PKA-dependent activation in rat osteoblasts. A, osteoblasts were transfected with 250 ng of reporter plasmids containing fragments of the rat C/EBPβ promoter with 5′-termini at −2700, −1300, or −541 bp and a common 3′-terminus at +54 bp, in a total of 500 μl. Mutations introduced into C/EBPβ promoter fragment −541 to +54 to disrupt either or both previously identified CREs between −111 and −61 bp are designated as −541-1, −541-2, and −541-1,2. B, osteoblasts were co-transfected to express 250 ng of the native −541-bp fragment of the C/EBPβ promoter and 100 ng of either a mutated regulatory subunit of PKA (PKAreg) or a dominant negative PKCα subunit (PKC DN ), in a total of 500 μl. Reporter gene activity was measured after 6 h of treatment with vehicle (control) or 1 μM PGE 2 . Data were corrected for protein content and are the means ± S.E. from 9 or more replicate cultures per condition and 3 or more experiments. *, basal C/EBPβ promoter activity and the stimulatory effect of PGE 2 were significantly suppressed (p < 0.05) by either or both CRE mutations (in A) and only by PKAreg in PGE 2 -treated osteoblasts (in B).

Using the rat Fra-2 cDNA as a probe, we then examined Fra-2 mRNA by Northern blot analysis. Fra-2 mRNA levels increased within 30 min of PGE 2 treatment, peaked at 1 h, and remained elevated for at least 4 h (Fig. 4D). Fra-2 Regulates C/EBPβ Promoter Activity in PGE 2 -induced Osteoblasts-We produced expression plasmid constructs encoding full-length Fra-2 and a carboxyl-truncated dominant negative Fra-2 devoid of its carboxyl-terminal transactivation domain (41). The full-length and dominant negative Fra-2 were then co-transfected with either the fully active −541-bp C/EBPβ promoter plasmid or that containing the mutated CRE1 and CRE2 elements. Full-length Fra-2 significantly enhanced PGE 2 -induced C/EBPβ promoter activation in a concentration-dependent fashion (Fig. 6A), whereas dominant negative Fra-2 significantly reduced C/EBPβ promoter activation by 50% (Fig. 6B). Consistent with the importance of these CREs for Fra-2 activity, full-length Fra-2 failed to enhance gene expression in cells co-transfected to express the C/EBPβ promoter containing the CRE1/CRE2 mutations (Fig. 6A). The dominant negative expression construct encoding Fra-2 also suppressed basal C/EBPβ gene promoter activity (Fig. 6B). This effect may be direct, through other currently unidentified Fra-2 binding elements, or it may be indirect, through the formation of inactive heterodimer complexes between other important trans-acting factors and the dominant negative Fra-2 protein. In either case, these results confirm that C/EBPβ expression is transcriptionally activated by PGE 2 through the CRE1 and CRE2 AP-1 binding elements and reveal the stimulatory effect at these sites of newly synthesized Fra-2 in PKA-activated osteoblasts.

DISCUSSION

C/EBPβ and C/EBPδ are important components in the development of the acute phase response and participate in the induction of many genes involved in tissue remodeling (1,2). In osteoblasts, C/EBPs can activate the expression of several important gene products, including IGF-I, IGFBP-5, IL-6, osteocalcin, and cyclooxygenase 2 (4, 6-9).
C/EBPδ is the predominantly expressed C/EBP in unstimulated rat and human osteoblasts (4). In this regard, we earlier reported that Runx2, an essential transcription factor required for osteoblast differentiation, is responsible for basal and PGE 2 -induced C/EBPδ expression (12). Although basal C/EBPβ expression in osteoblasts is relatively low, PGE 2 or any hormone, such as PTH, or agent, such as forskolin, that elevates cAMP levels and activates PKA rapidly induces its expression. A novel role for C/EBPβ in the regulation of cell survival has recently been suggested. In hepatic stellate cells, oxidative stress activates ribosomal protein S-6 kinase, which can phosphorylate C/EBPβ on threonine 217, creating a functional so-called XEXD caspase substrate inhibitory box (42,43). This evidence shows a direct link between C/EBPβ threonine 217 phosphorylation and an association with procaspase-1 and -8, which inhibits apoptosis. Therefore, C/EBPβ may have profound effects on cell survival as well as gene transcription. Our study identified two Fra-2-dependent CREs in the C/EBPβ gene promoter and showed that they are responsible for PKA-dependent activation by PGE 2 in osteoblasts. These elements were originally identified as CREB-binding elements in hepatocytes, whereas more recent studies indicate a potential auto-regulatory role for C/EBPβ through the more upstream binding sequence that we designated as CRE1 (17,18). However, we found no CREB, C/EBPβ, or C/EBPδ binding to either CRE in PGE 2 -treated osteoblasts by EMSA. Moreover, when we transfected osteoblasts to over-express CREB in combination with the C/EBPβ promoter, we did not detect an increase in reporter gene expression. 2 It is important to note that the increase in C/EBPβ expression in IL-6-activated hepatocytes is also thought to occur through these CREs, where a complex containing activated STAT3 is tethered to an unidentified 68-kDa protein that associates with each of these elements (19). Therefore, multiple observations, including our current findings, show the importance of both CREs in the activation of C/EBPβ gene expression in cells from several tissue sources, albeit through different trans-acting proteins. In PGE 2 -activated osteoblasts, Fra-2 binds directly to these CREs in the C/EBPβ promoter. The increase in C/EBPβ gene expression by PGE 2 required PKA activation and ongoing protein synthesis, consistent with the low level of Fra-2 in unstimulated osteoblasts and its rapid induction by PGE 2 . Overexpression of full-length Fra-2 further enhanced C/EBPβ promoter activity in osteoblasts activated with PGE 2 , whereas truncated, dominant negative Fra-2 severely suppressed this response. These gain- and loss-of-function effects confirm an important if not unique role for Fra-2 in the induction of C/EBPβ expression in differentiating osteoblasts. Furthermore, the activation domain of Fra-2 has several known phosphorylation sites (44-47). Therefore, post-translational kinase-dependent modification may play an important role in Fra-2-dependent C/EBPβ activation in osteoblasts, which will be the subject of our future studies. The protein sequence predicted from the rat Fra-2 cDNA that we cloned retains 95.4% homology to human Fra-2 and 99% homology to the published rat Fra-2 nucleotide sequence (GenBank TM accession numbers NM_005253 and NM_012954, respectively).
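To make the quoted homology figures concrete, the toy sketch below computes column-wise percent identity for a pre-aligned pair of sequences. The actual comparison in this study used ClustalW on the full-length sequences; the short fragments here are hypothetical placeholders, not the real Fra-2 sequences.

```python
def percent_identity(a: str, b: str) -> float:
    """Column-wise identity of two pre-aligned, equal-length sequences
    (gap characters '-' count as mismatches)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
    return 100.0 * matches / len(a)

rat_fra2   = "MYQDYPGNFDTSSRGSS"   # hypothetical aligned fragment
human_fra2 = "MYQDYPGNFDASSRGSS"   # hypothetical aligned fragment
print(f"identity: {percent_identity(rat_fra2, human_fra2):.1f}%")  # 94.1%
```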
However, a comparison among the various Fra-2 sequences available shows that in four of five instances, amino acids predicted by our sequence that differ from the previously reported rat Fra-2 sequence are identical to those that occur in human Fra-2. Thus, some differences may relate to species, strain, or tissue variability. The sequence that we derived from rat osteoblast cDNA has been deposited in GenBank TM (accession no. AY622611). Analogous to the C/EBPs, CREB, the ATFs, and the AP-1 factors c-Fos and c-Jun, Fra-2 is a member of the bZip (basic leucine zipper) family of transcription factors. Each member of this protein family appears to function as a dimer and can form homodimers or heterodimers with other select family members (1). Polyclonal anti-Fra-2 antibody modified nuclear protein binding to the CREs found in the C/EBPβ promoter by EMSA, but we did not detect C/EBPδ, C/EBPβ, CREB, ATF-2, or JunD in the gel shift complex using specific antibodies and similar methods. 2 Unlike the C/EBPs, the transactivation domain of Fra-2 occurs in the carboxyl-terminal region, which contains potential phosphorylation sites, whereas its leucine zipper dimerization domain is centrally located, and its DNA binding domain resides in the amino-terminal region (41, 48-50). Therefore, this dissimilar organization of the Fra-2 protein structure, at least by comparison with the C/EBPs (51,52), suggests that it may have a restricted pattern of functional binding partners (50,53,54). Additional studies will be necessary to determine whether Fra-2 acts as a homodimeric transactivator of C/EBPβ gene expression in osteoblasts or to identify other potential binding partners for Fra-2 in this context.

FIG. 4. PGE 2 induces Fra-2 expression in rat osteoblasts. A, nuclear extracts from osteoblasts treated for 4 h in serum-free medium with vehicle (0) or 1 μM PGE 2 (P) were examined by EMSA with 32 P-labeled oligonucleotide CRE1 as described for Fig. 3 and either nonimmune Ig or anti-Fra-2 antibody (ab). B, total osteoblast extract from vehicle- or PGE 2 -treated osteoblasts for the time periods indicated was fractionated by SDS-PAGE through a 12.5% Laemmli gel under reducing conditions and probed with rabbit anti-Fra-2 polyclonal antiserum. C, total nuclear extracts from osteoblasts treated with PGE 2 without or with cycloheximide (Cyclohex), as indicated, were assessed as described in B. D, total RNA from osteoblasts treated with vehicle (0) or 1 μM PGE 2 for the time periods indicated was fractionated by agarose gel electrophoresis, blotted onto charge-modified nylon, probed with 32 P-labeled full-length Fra-2 cDNA, and assessed by autoradiography. Parallel samples were stained with ethidium to detect 18 S and 28 S rRNA bands.

FIG. 5. Comparison of amino acid sequences corresponding to Fra-2 from human and rat cells. The sequence of rat osteoblast Fra-2 protein was derived from the cDNA cloned from primary cultures of fetal rat osteoblasts. The predicted protein sequence from rat osteoblast cDNA (Fra2_Rat_OB) was compared with Fra-2 cloned from human monocytic cells (Fra2_Human_Mono) and rat pineal gland (Fra2_Rat_PIN) using ClustalW software from the European Bioinformatics Institute (www.ebi.ac.uk). By the criteria of this software program, an asterisk below the sequence means that the residues in that column are identical in all sequences in the alignment. A colon below the sequence means that conserved substitutions were observed.
A period below the sequence means that semi-conserved substitutions were observed.

Expression of Fra-2 during mouse embryonic development reveals temporal and spatial variations. It appears late in organogenesis, where it occurs in developing cartilage, including the bony and cartilaginous sides of the growth plate, mandibles, and ribs, and in the central nervous system (55). Differentiated osteoblasts express Fra-2, and experiments with Fra-2 antisense reveal significant suppression of the differentiated osteoblast phenotype and a diminished bone tissue-like organization in Fra-2 antisense-treated cell cultures (38). As we saw with PGE 2 in primary rat osteoblasts, PTH stimulates Fra-2 expression in the murine preosteoblastic cell line MC3T3-E1 (40). PTH alters the expression of many downstream genes in osteoblasts (56). These effects, which may vary with regard to concentration and duration of PTH treatment, involve multiple signal pathways (11,36,57), analogous to several activators of PKA in other tissue-derived cells (58). Moreover, transforming growth factor-β, fibroblast growth factor-2, and mechanical loading also regulate Fra-2-dependent gene expression in vivo or in vitro in bone or bone cell models (39, 59-61). Recent in vivo studies support these in vitro observations in osteoblasts, indicating that Fra-2 may have a significant effect on the developing or remodeling skeleton (32). Over-expression of Fra-2 in mice increased the bone volume and bone formation rate, even in the absence of changes in osteoblast number. In the opposite situation, animals lacking a functional Fra-2 gene exhibited growth retardation, severe osteoporosis, and perinatal death. Osteoblast number was also unaffected in Fra-2-deficient mice, but osteoclast number and size were increased. Even so, fewer osteocalcin-expressing osteoblasts occurred in Fra-2-deficient animals. This is consistent with the view that osteocalcin is a target gene for C/EBPβ and that loss of Fra-2 would consequently lower the expression of C/EBPβ or its induction by hormones such as PGE 2 and PTH that affect bone remodeling. In summary, our current studies reveal that C/EBPβ gene expression in osteoblasts is regulated by activation of PKA through two CREs that occur within a downstream region of the C/EBPβ gene promoter and that associate with the newly synthesized transcription factor Fra-2. These results, in combination with our earlier studies revealing Runx-2-dependent expression of C/EBPδ, continue to define the complex molecular events (modeled in Fig. 7) that consequently control hormone-dependent changes in expression of the important bone growth factor, IGF-I.

FIG. 6. Over-expression of native or dominant negative Fra-2 modifies C/EBPβ gene promoter activity in rat osteoblasts. Osteoblasts were transfected with the indicated amounts of pcDNA3 expression plasmids encoding full-length rat Fra-2 cDNA (A) or dominant negative Fra-2 (Fra-2-DN) produced by carboxyl-terminal truncation to delete amino acids 208-328 (B), in combination with 250 ng of reporter plasmids driven by either the native −541-bp C/EBPβ promoter fragment or the −541-1,2 plasmid containing mutations in both CRE1 and CRE2, in a total of 500 μl. Compensating amounts of the parental expression vector pcDNA3 were added to balance the total plasmid load. Cells were treated for 6 h with vehicle (control) or 1 μM PGE 2 as indicated, and reporter gene activity was then measured. Data were corrected for protein content and are the means ± S.E.
from 9 or more replicate cultures per condition and 3 or more experiments. *, the stimulatory effect of PGE 2 was significantly enhanced (p < 0.05) in cells transfected with 100-150 ng of native Fra-2 and significantly suppressed (p < 0.05) in cells transfected to express 100-300 ng of dominant negative Fra-2.

FIG. 7. PKA-dependent control of IGF-I gene expression in osteoblasts. A, osteoblasts exposed to hormones such as PGE 2 and PTH, which have receptors that couple to adenylate cyclase, increase cAMP synthesis and enhance PKA activity, in turn increasing IGF-I mRNA transcription through a C/EBP-sensitive element in exon 1 within the IGF-I gene. B, previous studies showed that this occurs in part through a translation-independent effect on the activation of pre-existing C/EBPδ and C/EBPβ (central bifurcated arrow) (3,5) and through a Runx-2-dependent transcriptional effect on new C/EBPδ gene expression (left column) (12). Studies in this report demonstrate a parallel Fra-2-dependent transcriptional effect on C/EBPβ gene expression (right column), revealing complex reinforcing effects on IGF-I synthesis through increases in both C/EBP expression and activity. The question mark indicates a current gap in our knowledge about the molecular events that control Fra-2 synthesis.
6,756.4
2004-10-08T00:00:00.000
[ "Biology", "Medicine" ]
Hydroxyapatite: a promising hemostatic component in orthopaedic applications

An agent with both strong blood clotting activity and good bone regeneration ability is desirable as a replacement for conventional bone wax. Recently, hydroxyapatite (HA) has attracted interest from researchers for its combined hemostatic and bone healing functions. In the present work, comparisons of the blood clotting activity of HA with other potential bone repairing materials, including calcium silicate, calcium combined attapulgite, calcium tripolyphosphate, and chitosan, were carried out to establish HA as a recommended hemostatic component to replace bone wax. In addition, the impacts of HA synthesis routes on its blood clotting activity were evaluated, indicating that increases in the surface area as well as in the active Ca2+ content of HA can greatly enhance blood clotting. With these attributes, HA is expected to be a promising component for fabricating hemostatic materials in orthopedic applications as alternatives to bone wax.

Correspondence to: Huan Zhou, Institute of Biomedical Engineering and Health Sciences, Changzhou University, Changzhou, China, Tel: (86)0519-86330103; E-mail: <EMAIL_ADDRESS>Linhong Deng, Institute of Biomedical Engineering and Health Sciences, Changzhou University, Changzhou, China, Tel: (86)0519-86330988; E-mail: <EMAIL_ADDRESS>

Introduction

A hemostatic agent is critical for successful clinical outcomes in bone defect surgery. Conventionally, beeswax-based bone wax has been used as the hemostatic agent, but its use is challenged by poor biodegradability and biocompatibility [1]. Potential alternative hemostatic candidates in orthopedic surgery include both natural polymers, such as collagen, cellulose, and gelatin, and inorganic materials, such as zeolite, clays, and silica. However, these materials may pose different problems in clinical practice. For example, as shown in a recent spinal surgery study in rats, hemostatic polymers may cause undesirable complications such as inflammation and fibrosis [2]. On the other hand, the inorganic materials may suffer from a non-biocompatible and/or non-biodegradable nature, as well as hydration-related thermal issues [3,4]. In principle, an ideal hemostatic agent for orthopedic applications should not only be able to stop bleeding but also promote bone healing. Recently, hydroxyapatite (HA, Ca 10 (PO 4 ) 6 (OH) 2 ) has attracted interest from researchers because of its hemostatic properties, besides its more well-known bone healing function [5,6]. Initially, HA was combined with hemostatic polymers to improve their limited osteoconductivity. For example, Hoffmann fabricated an HA/starch/chitosan composite hemostatic material, proposed as a substitute for bone wax or even as a bone filling material for orthopedic surgery applications [7]. Subsequently, researchers noticed that the presence of HA not only improves a composite's bone regeneration ability, but also enhances its blood clotting activity. Maruyama et al. combined HA with agarose gel and reported that this combination greatly induces activation of blood coagulation and platelet aggregation compared to HA or agarose alone [8]. Song et al. deposited HA onto porous PLGA microspheres, and the blood clotting activity improved as the HA content increased [5]. Researchers have suggested that the blood clotting activity of HA is attributable to its high affinity for plasma proteins such as fibrinogen, and to released Ca 2+ [8].
Unfortunately, few fundamental studies have been carried out to evaluate the blood clotting activity of HA in comparison to other potential bone repairing materials, to highlight its significance as a hemostatic agent in orthopaedic applications. Meanwhile, it is also unclear whether the synthesis route of HA has an impact on its blood clotting activity. Therefore, in the current work we report experimental results of blood clotting activity comparisons of 1) calcium-based inorganic bone repairing materials, including HA, calcium silicate (CaSiO 3 ), calcium combined attapulgite (Ca-attapulgite, Ca-(Mg,Al) 2 Si 4 O 10 (OH)·4(H 2 O)), and calcium tripolyphosphate (Ca 5 (P 3 O 10 ) 2 ); 2) HA and hemostatic polymers such as chitosan; and 3) HA synthesized following different approaches.

Material and methods

Chemicals were purchased from Aladdin China if not specified. HA was hydrothermally synthesized in an autoclave using Ca(OH) 2 and Na 2 HPO 4 as reported by our group [9]. Generally, an amount of 0.37 g of Ca(OH) 2 was mixed with 300 mL of distilled water to make a suspension. Then 0.71 g Na 2 HPO 4 was added to react with the Ca(OH) 2 . The prepared liquid mixture was magnetically stirred for 15 min. The pH value of the liquid mixture was kept at 10 using 1 M NaOH solution. The mixture was hydrothermally treated in an autoclave for 4 hours to obtain HA. CaSiO 3 was precipitated via the reaction of tetraethyl orthosilicate (TEOS) and Ca(NO 3 ) 2 . Briefly, 12 mL NH 3 ·H 2 O was dissolved in 600 mL distilled H 2 O with stirring for 30 min. Then, 30 mL TEOS and 31.21 g Ca(NO 3 ) 2 ·4H 2 O were added with vigorous stirring for 3 hours. The products were collected by filtration and washed three times each with distilled H 2 O and ethanol. Ca-attapulgite was prepared using attapulgite purchased from Zijin Mining, China. The powders were treated by 24 hours of acidification using 6 M HCl, followed by 24 hours of incubation in 1 M CaCl 2 with stirring. Meanwhile, Ca 5 (P 3 O 10 ) 2 was formed by complexation of 1.11 g CaCl 2 and 0.123 g Na 5 P 3 O 10 (STPP) in 100 mL H 2 O with continued stirring for 30 min. All as-prepared powders were characterized using X-ray diffraction (XRD, Rigaku) and transmission electron microscopy (TEM, Zeiss). The blood clotting activity was measured in vitro as the blood clotting index (BCI) [10]. Human blood mixed with the anticoagulant citrate dextrose (ACD) at 9:1 was used for testing, referred to as ACD-whole blood. This blood was kindly provided by Changzhou No.2 People's Hospital. In brief, 0.09 g of powder was contacted with a 0.27 mL blood sample (0.3 mL ACD-whole blood with the addition of 0.024 mL CaCl 2 (0.2 mol/L)) at 37°C for 10 min. The free blood was collected and diluted into 50 mL for spectrophotometric measurement at 542 nm. The absorbance of 0.25 mL ACD-whole blood in 50 mL deionized water at 542 nm was used as the reference value. The BCI can be quantified by the following equation: BCI (%) = (A s /A r ) × 100, where A s is the absorbance of the diluted free blood from the sample and A r is the reference absorbance. Powders of chitosan, HA and a mixture of both (1:1) were used for BCI testing. In addition, considering that HA can be combined with chitosan to fabricate biomimetic bone scaffolds [11], a comparison between a porous chitosan scaffold and an HA-coated one was also carried out. 600 μL of 0.015 g/mL chitosan solution per well was freeze-dried into a porous scaffold, which was further incubated in 1.5× t-simulated body fluid (t-SBF) at 37°C for 7 days, with the solution replenished every 48 hours, to deposit HA coatings (Table 1).
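The following is a minimal Python sketch of the BCI calculation defined above; a lower BCI indicates stronger clotting, since less free hemoglobin remains in solution. The absorbance values are illustrative placeholders, not measured data from this study.

```python
def bci(abs_sample: float, abs_reference: float) -> float:
    """Blood clotting index at 542 nm: BCI (%) = (A_s / A_r) * 100.
    Lower BCI means stronger clotting activity."""
    return 100.0 * abs_sample / abs_reference

a_ref = 0.82   # hypothetical absorbance of 0.25 mL ACD-whole blood in 50 mL water
samples = {"HA": 0.18, "chitosan": 0.31, "HA/chitosan 1:1": 0.20}  # hypothetical
for name, a in samples.items():
    print(f"{name}: BCI = {bci(a, a_ref):.1f}%")
```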
The surface change of the chitosan scaffold after SBF incubation was characterized using a scanning electron microscope (SEM, Zeiss). The t-SBF is a Tris (C 4 H 11 NO 6 ) buffered SBF solution developed by Tas and Bhaduri, closely mimicking the composition of human blood plasma [12]. In the present work, the ionic concentrations of the t-SBF solution were increased 1.5-fold to accelerate HA coating formation. The BCI and the swelling ability of the scaffolds in phosphate buffer (PBS) were measured. The swelling ratio of the scaffold at a given time (t), Q t , can be calculated using the equation Q t = (m t − m 0 )/m 0 , where m 0 and m t are the weights of the dried and swollen scaffold, respectively; Q t is thus expressed as grams of water per gram of scaffold. The third part was the study of the clotting activity of HA synthesized following different approaches. Sodium hexametaphosphate (Na 6 P 6 O 18 , SHMP) was used to prepare mesoporous HA (HA-HMP), to show that an increase in surface area can promote clotting [9]. On the other hand, precipitates (HA-1) from a solution of 11.1 g/L CaCl 2 and 1.56 g/L NaH 2 PO 4 ·2H 2 O were studied to test whether an increase of the Ca/P ratio can have a significant influence on the related blood clotting activity. XRD and TEM characterizations of these powders were also carried out.

Results and discussion

The XRD patterns of the as-prepared Ca-containing inorganic salts are shown in Figure 1. All powders displayed the characteristics of the expected phases. According to the XRD, the synthesized HA and Ca-attapulgite matched the profiles in Jade (PDF # 09-0432 and 20-0958), respectively, while the as-prepared CaSiO 3 and Ca 5 (P 3 O 10 ) 2 were mainly amorphous, as reported before [13,14]. The TEM results of these particles are presented in Figure 2. It was seen that the HA, CaSiO 3 , Ca-attapulgite and Ca 5 (P 3 O 10 ) 2 present rod-like, spherical, whisker-like and mesoporous morphologies, respectively. Among these materials, CaSiO 3 has commonly been studied as an alternative to HA for bone repair [15]. Additionally, it also shows an ability to adsorb proteins similar to HA [16]. Therefore, it is necessary to compare the hemostatic ability of HA and CaSiO 3 , thus indicating the reason for choosing HA as the potential hemostatic agent instead of CaSiO 3 . Attapulgite is another silicate material highlighted for its adsorption ability and bone repairing potential [17]. The incorporation of Ca 2+ into attapulgite was expected to enhance its clotting activity. Meanwhile, the reasons for choosing Ca 5 (P 3 O 10 ) 2 were the report that polyphosphate can accelerate blood clotting [18] and its self-assembled porous structure [13]. Per the BCI results (Figure 3), HA had the best blood clotting activity among them. This phenomenon can be explained by the facts that HA has a high affinity for plasma proteins such as fibrinogen, and can release Ca 2+ to specifically activate prothrombin and coagulation factors to enhance blood clotting [8]. Therefore, HA is recommended over the above four candidates as the hemostatic agent for bone defect applications. On the other hand, when compared to chitosan powder, HA showed better clotting activity (Figure 4). When chitosan was mixed with HA, the blood clotting activity of the mixture was comparable to that of HA. This phenomenon could be caused by the combined effects of the multiple clotting routes of chitosan and HA.
Indeed, chitosan stimulates platelet and erythrocyte aggregation via its amino residues (positively charged surface) [19] and concentrates blood to accelerate clotting via its hydration behavior [20], coagulation routes completely different from those of HA. On the other hand, when HA was coated onto the chitosan matrix, the clotting activity depended not only on the combined effects of chitosan and HA but was also influenced by the amount of blood concentrated by the porous scaffold. According to SEM after 7 days of SBF incubation, HA was successfully deposited on the chitosan matrix (Figure 5). Although HA limited the swelling of the scaffold (Figure 6a), the BCI difference between the chitosan scaffold and the HA-coated one was not significant (Figure 6b). This observation was attributed to the increase in HA content (49 ± 5 wt.%) and in matrix stiffness [21]: as reported by Qiu et al., increasing substrate stiffness leads to increased platelet adhesion, spreading, and activation [22].

In the literature, depending on the phosphate source used as well as the hydrothermal conditions, the morphology of HA can be tailored [9,23]. It has been reported that inorganic condensed phosphates have a high affinity for Ca2+ ions, forming complexes in aqueous media; under hydrothermal conditions, condensed phosphates can subsequently be hydrolyzed to release orthophosphate. Therefore, using P6O18^6− instead of PO4^3− can result in mesoporous HA, thereby changing its clotting activity. HA-HMP was confirmed to be HA by XRD (Figure 7a) and presented an irregularly shaped, mesoporous morphology (Figure 7b). As expected, the increased surface area enhanced the clotting activity in comparison with regular dense HA particles (Figure 7c). On the other hand, HA-1, obtained with a significantly increased Ca/P ratio in the reaction solution, showed a phase impurity and a greatly increased blood clotting activity. As seen in the XRD, HA-1 displayed the characteristics of both HA and brushite (CaHPO4·2H2O, PDF #09-0077) (Figure 8a); consequently, in TEM the nanoparticles showed both rod-like and plate-like morphologies (Figure 8b). In the subsequent BCI test, HA-1 showed much higher blood clotting activity than HA (Figure 8c). It is known that fast precipitation of HA caused by high ionic concentrations can load significant amounts of ions into the HA lattice [24]; therefore, in HA-1 a quick release of Ca2+ was expected once in contact with blood, stimulating the coagulation cascade. After coagulation, both HA and brushite could induce bone regeneration. This phenomenon suggests the possibility of loading different ions into HA to assist both blood clotting and bone formation. Indeed, ions such as Mg2+, Zn2+, and CO3^2− have been doped into HA to favor bone formation or even provide antibacterial properties [25,26]. These ions also show potential to enhance blood clotting in addition to Ca2+: Mg2+ was observed to enhance the coagulant activity of factor IXa [27]; Zn2+ was found to be an important cofactor in regulating platelet aggregation and coagulation [28]; and the presence of CO3^2− in HA could promote blood clotting and protein adsorption [29].

Conclusion In summary, we showed that 1) HA is recommended as a potential agent for blood clotting and bone repair, alone or combined with biopolymers; and 2) a large surface area as well as a high amount of active Ca2+ can significantly improve the blood clotting activity of HA.
It is hoped that the present work will promote the development of HA-based products to replace conventional bone wax.
Affiliated Fusion Conditional Random Field for Urban UAV Image Semantic Segmentation Unmanned aerial vehicles (UAVs) have made significant progress in the last decade and are applied in many fields, owing to advances in aerial image processing and their ability to explore areas that humans cannot reach. Still, semantic image segmentation, the basis of further applications such as object tracking and terrain classification, remains one of the most difficult challenges in computer vision. In this paper, we propose a method for semantic segmentation of urban UAV images that utilizes the geographical information of the region of interest in the form of a digital surface model (DSM). We introduce an Affiliated Fusion Conditional Random Field (AF-CRF), which combines the information of visual pictures and the DSM, and a multi-scale strategy with attention to improve the segmentation results. The experiments show that the proposed structure performs better than state-of-the-art networks in multiple metrics.

Introduction A Conditional Random Field (CRF) is a structured prediction model based on graph structures: a Markov Random Field (MRF) over random variables Y given random variables X [1]. CRFs have been widely used in various structured prediction tasks, such as estimating scores in Go games [2], separating specific genes from DNA [3], word segmentation in natural language processing [4,5], and image segmentation in computer vision [6,7]. Since a CRF can utilize lower-level contextual information at the structural level, it is well suited to these structured prediction tasks. Semantic image segmentation, a pixel-level recognition task that assigns an object category to each pixel in the image, is a typical structured prediction task. It is the basis of scene-parsing tasks, which have great potential in further applications such as automatic cruise, the landing of drones, and autonomous vehicles. Early applications of CRFs to semantic image segmentation mainly utilized a second-order CRF to model an image [8], which reads the associated information of the four-neighborhood of each pixel and uses two types of potential functions to model the conditional probability: a unary potential function, related only to the current pixel's features, and a pairwise potential function, associated with pixels within the neighborhood. Exact inference in such models often carries a large computational cost; therefore, the maximum a posteriori estimate of the conditional probability is usually approximated by sampling methods or variational methods. The four-neighbor connectivity CRF has achieved a degree of success in certain fields, yet it has inherent structural drawbacks: four-neighbor connections only capture the dependence between adjacent pixels, not their spatial positions, and cannot establish long-range dependence between two pixels that are far apart. Satisfactory segmentation results therefore cannot be obtained when the target category exhibits large crossing areas, occlusion, repetition, or deformation. To solve this problem, a series of CRFs using higher-order potential functions have been proposed [9,10], which alleviate the above problems in certain respects, but at a significantly increased computational complexity.
To address these problems, a fully connected CRF with Gaussian kernels in the feature space was proposed [11], which achieves a good compromise between speed and precision. However, CRFs that use fixed, hand-specified kernel functions are still not robust enough: performance drops on complex textures, especially under the scale changes of long-range images such as those taken by drones or satellites. To better exploit the spatial information of long-range pixels and reduce misclassification in image regions with complex texture, we introduce geographical information through a digital surface model (DSM). To further improve segmentation accuracy, this paper proposes a fully connected Affiliated Fusion CRF (AF-CRF) with DSM, which (1) adopts a fully connected CRF structure to take long-range dependence into account; (2) utilizes the corresponding DSM to comprehend the height information of the region of interest (ROI); and (3) uses multi-scale pyramids for both the images and the DSM for global feature understanding. In contrast to large amounts of manually labeled pictures, a DSM can be generated automatically from the pictures taken above the ROI. The extra elevation information of the region of interest is expected to improve the performance of the classifier.

Preliminary A CRF is a discriminative undirected graph model: compared with the corresponding generative model, it directly models the conditional probability. In this section, we briefly introduce the theory of CRFs and their general models.

Conditional Random Field The idea of a CRF is to model the conditional probabilities of multiple variables given observations. Let x = {x_1, x_2, …, x_n} be the observation sequence and y = {y_1, y_2, …, y_n} be the corresponding label sequence. The purpose of the CRF is to model the conditional probability P(y|x). Here y is a structural variable, so some relationship exists between its components. Let G = (V, E) denote an undirected graph whose nodes are in one-to-one correspondence with the components of the label y; y_v denotes the annotation variable corresponding to node v, and n(v) denotes the nodes adjacent to v. If each variable y_v of the graph satisfies the Markov property, then P(y_v | x, y_{V\{v}}) = P(y_v | x, y_{n(v)}), where V\{v} denotes all nodes except v.

General CRF Model From the view of Equation (1), a CRF adds an observation sequence to an MRF; i.e., a CRF is an MRF given the observations. Similar to an MRF, a CRF defines the conditional probability distribution with potential functions and the cliques of the graph. According to the Hammersley–Clifford theorem, the probability distribution of multiple variables can be decomposed into a product of factors based on the cliques of the graph, in which each factor is related only to a single maximal clique [12]. Specifically, for an observation sequence x = {x_1, x_2, …, x_n} and its corresponding label sequence y = {y_1, y_2, …, y_n}, let 𝒞 denote the set of maximal cliques and let x_C, y_C denote the variables corresponding to a single maximal clique C ∈ 𝒞; then the conditional distribution can be written as P(y|x) = (1/Z) ∏_{C∈𝒞} Ψ_C(y_C, x_C), where Z = Σ_y ∏_{C∈𝒞} Ψ_C(y_C, x_C) is the partition function that normalizes the product. In particular, a general CRF model can be applied to the problem of semantic image segmentation. Consider a random field X defined on a set of variables x = {x_1, x_2, …, x_n} with labels y ∈ L = {l_1, l_2, …, l_k}, and another random field I defined on another set of variables {I_1, I_2,
…, I_N}, in which I denotes an input image of size N and X denotes the pixel-level annotation of I. Then the conditional probability distribution P(X|I) can be expressed as P(X|I) = (1/Z(I)) exp(−Σ_{c∈C_G} φ_c(X_c|I)), where G = (V, E) is the graph defined on X, C_G is the set of all cliques in G, and every clique c in C_G introduces a potential function φ_c [13]. The Gibbs energy of an annotation x ∈ L^N is E(x) = Σ_{c∈C_G} φ_c(x_c|I), and the MAP estimate of the CRF is x* = argmax_{x∈L^N} P(x|I). In a fully connected CRF, let G be the complete graph of X, in which C_G is the set of all unary and pairwise cliques; the corresponding Gibbs energy is E(x) = Σ_i φ_u(x_i|I) + Σ_{i<j} φ_p(x_i, x_j|I). The unary potential φ_u(x_i|I) is related only to the feature of a single pixel, while the pairwise potential φ_p(x_i, x_j|I) is related to the similarities and differences between pairs of pixels and can be expressed as a linear combination of Gaussian kernels defined in the feature space, φ_p(x_i, x_j|I) = μ(x_i, x_j) Σ_m w^(m) k^(m)(f_i, f_j), where μ(x_i, x_j) is the label compatibility function measuring the probability that two labels appear at the same time. For instance, the probability of the annotation pair {ship} and {waters} should intuitively be larger than that of {plane} and {waters}.

Affiliated Fusion Conditional Random Fields In this section, we describe the proposed model for semantically segmenting urban unmanned aerial vehicle (UAV) images. We first briefly introduce the theory of multi-scale and attention analysis; afterwards, the Affiliated Fusion CRF is proposed.

Multi-Scale Analysis Multi-scale analysis is a common method in digital image processing, involving the representation and analysis of images at multiple resolutions. Its advantage is obvious: features that cannot be detected at one resolution are often easier to detect at another. This section uses the classic Gaussian image pyramid as the multi-scale representation. An image pyramid is a series of images obtained from the original image by repeated downsampling with the same ratio: the bottom layer has the original size and the highest resolution, and resolution decreases in the upper layers, so the images in each layer have different sizes and resolutions. A complete image pyramid has n + 1 layers. Using different scale representations of an image can be thought of as adding another dimension to the image: in addition to the conventional positional dimensions (x, y), a dimension describing the current pyramid layer is added. This structure is also called the scale space. According to the sampling theorem, fine structure sampled at less than 1/4 wavelength must first be eliminated by a smoothing filter so that a correct downsampled image can be obtained; from a scale-space perspective, this means that reducing the size of the image must be done in synchrony with a proper smoothing operation. The smoothing can be performed by various low-pass filters; the image pyramid obtained with a Gaussian smoothing filter can be expressed as G_l(x, y) = Σ_m Σ_n w(m, n) G_{l−1}(2x + m, 2y + n), where G_l is the image at layer l and w(m, n) is the Gaussian weighting kernel.
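A minimal sketch of this smooth-then-decimate construction, using a Gaussian low-pass filter (illustrative code; the smoothing width sigma is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image: np.ndarray, levels: int, sigma: float = 1.0):
    """Build a Gaussian pyramid: smooth, then decimate by 2 per level.

    Smoothing before decimation removes structure finer than the new
    sampling rate, as the sampling theorem requires. Works on 2-D arrays
    (grayscale channels or DSM height maps); RGB images can be processed
    per channel.
    """
    pyramid = [np.asarray(image, dtype=np.float64)]
    for _ in range(levels):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(smoothed[::2, ::2])  # keep every second row/column
    return pyramid

# Scales 1/2 and 1/4, as used in the experiments below:
# image_scales = gaussian_pyramid(dsm, levels=2)
```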
AF-CRF As noted above, the unary potential φ_u(x_i|I) depends only on the feature of a single pixel, while the pairwise potential φ_p(x_i, x_j|I) captures the similarities and differences between pairs of pixels. Therefore, to take advantage of the contextual information in the scale space, we optimize φ_p(x_i, x_j|I) in the proposed AF-CRF. With the model parameters θ, and hiding the image I from Equation (7) for convenience, the pairwise potential can be expressed as φ_p(x_i, x_j|θ) = μ(x_i, x_j|θ) Σ_{m=1}^{M} w^(m) k^(m)(f_i, f_j), where the k^(m)(·) are Gaussian kernel functions and m indexes them. Each kernel can be written as k^(m)(f_i, f_j) = exp(−½ (f_i − f_j)^T Σ^{−1}_(m) (f_i − f_j)), where the vectors f_i and f_j are the feature vectors of pixels i and j in the feature space, w^(m) is the weight of each kernel, and μ^(m)(x_i, x_j|θ) is the label compatibility function; each Gaussian kernel is defined by the inverse of a covariance matrix Σ^{−1}_(m). The Gaussian kernels of the pairwise potential of the CRF with the DSM mentioned above are constructed from the color vector I, the position vector p, and the height h. The addition of h brings spatial sensitivity into the pairwise potential, defined in the Gaussian kernels as k(f_i, f_j) = w^(1) exp(−|p_i − p_j|²/2θ_α² − |I_i − I_j|²/2θ_β² − |h_i − h_j|²/2θ_γ²) + w^(2) exp(−|p_i − p_j|²/2θ_s²), where the first (appearance) term expresses that pixels with similar positions, similar colors, and small height differences have a higher probability of sharing a label, and the second (smoothness) term expresses only that pixels with similar positions have a higher probability of sharing a label.

As for the label compatibility function μ^(m)(x_i, x_j|θ) appearing in Equation (9), a simple and practical choice is the Potts model, μ(l_i, l_j) = 1 − I(l_i, l_j), where I(l_i, l_j) is a 0–1 indicator equal to 1 if l_i = l_j and 0 otherwise. Although the segmentation results of a CRF with the Potts model are generally acceptable, its disadvantage is apparent from its definition: the Potts model penalizes all inconsistent label pairs equally. For instance, the penalty for the pair {ship} and {waters} equals that for {plane} and {waters}, which is obviously not intuitive. To improve on this simple Potts model, we can learn a symmetric, category-dependent label compatibility function, μ(l_i, l_j|θ) = μ(l_j, l_i|θ), so that a more reasonable penalty is imposed on annotation inconsistencies in the semantic segmentation of UAV images.

Taking attention into account, the images are input to the model at S scales and the outputs at all scales are upsampled to the original size; the attention feature f^s_{i,c} is the interest value of the ith pixel for class c at scale s, obtained with the pairwise potential of the CRF, where s ∈ {1, 2, …, S}. Let g_{i,c} denote the weighted sum over scales of the attention features at position i and class c, i.e., g_{i,c} = Σ_{s=1}^{S} δ^s_i f^s_{i,c}, with δ^s_i = e^{h^s_i} / Σ_{t=1}^{S} e^{h^t_i}, where h^t_i is the output score of each class (ranging from 1 to C) at each position (ranging from 1 to N), and δ^s_i reflects the importance of position i at scale s, shared across all channels. The attention term describes the relationship among the different scales of the CRF's input and combines the differences in focus at the different scales with adaptive weights; with it, the model considers multi-scale information in the inference stage to obtain a more reasonable prediction. Assume the input scales are s ∈ {1, 2, …, S} and combine the attention term proposed above. Considering the pairwise potential as an auxiliary decision condition in the inference stage, the Gibbs energy of the proposed model can be rewritten as E*(x) = Σ_i φ_u(x_i) + Σ_{s=1}^{S} Σ_{i<j} δ^s_i φ_{p,s}(x_i, x_j), where δ^s_i = e^{h^s_i} / Σ_{t=1}^{S} e^{h^t_i} as shown in Equation (15), and φ_{p,s}(·) denotes the pairwise potential at scale s.
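To make the kernel construction concrete, the following sketch evaluates the two-kernel pairwise term and the scale-attention weights δ (illustrative names; θ_smooth and the kernel weights are placeholders, while the θ = 13 values follow the grid search reported in the experiments):

```python
import numpy as np

def pairwise_kernel(p_i, p_j, c_i, c_j, h_i, h_j,
                    w1=1.0, w2=1.0,
                    theta_alpha=13.0, theta_beta=13.0,
                    theta_gamma=13.0, theta_smooth=3.0):
    """Two-kernel pairwise term of the AF-CRF (label compatibility aside).

    Appearance kernel: pixels close in position p, color c, and DSM
    height h are encouraged to share a label. Smoothness kernel: pure
    spatial proximity.
    """
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    c_i, c_j = np.asarray(c_i, float), np.asarray(c_j, float)
    appearance = w1 * np.exp(
        -np.sum((p_i - p_j) ** 2) / (2 * theta_alpha ** 2)
        - np.sum((c_i - c_j) ** 2) / (2 * theta_beta ** 2)
        - (h_i - h_j) ** 2 / (2 * theta_gamma ** 2))
    smoothness = w2 * np.exp(-np.sum((p_i - p_j) ** 2) / (2 * theta_smooth ** 2))
    return appearance + smoothness

def scale_attention(scores: np.ndarray) -> np.ndarray:
    """delta_s = softmax over scales of the per-pixel scores h_i^s.

    scores: shape (S, N), one score per scale and pixel; returns weights
    of the same shape summing to 1 over the scale axis.
    """
    e = np.exp(scores - scores.max(axis=0, keepdims=True))  # stable softmax
    return e / e.sum(axis=0)
```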
The new E*(x) considers the features of color, position, and height, as well as scale, which theoretically helps improve the robustness of the overall model. The workflow is illustrated in Figure 1.

Inference and Learning of the Model In a fully connected CRF, the bottleneck in computing speed lies in the message-passing step. To achieve fast operation of the proposed AF-CRF, this paper adopts the fast-solving algorithm with the mean-field approximation proposed in [11] instead of exact inference: the model inference is transformed into Gaussian filtering in the feature space, improving the operation speed while achieving acceptable accuracy. This section introduces the inference of conditional random fields and the learning methods for their parameters.

Inference According to Equation (16), the exact probability distribution can be written as P(X|I) = exp(−E*(x)) / Z(I). To achieve fast operation of the proposed AF-CRF, the mean-field approximation method is used: rather than calculating the exact probability distribution directly, we compute the distribution that minimizes the Kullback–Leibler (KL) divergence to the exact distribution. The approximate distribution is expressed as the product of a series of marginal distributions, Q(X) = ∏_i Q_i(X_i), where Q_i(X_i) is the marginal distribution of each variable in the model. The KL divergence of Q(X) from P(X) can then be expressed as D_KL(Q‖P) = E_{U∼Q}[ln Q(U)] − E_{U∼Q}[ln P(U)], where E_{U∼Q} denotes the expectation under the distribution Q. In this paper, the Q(X) of each scale is inferred separately to focus on that specific scale, denoted Q_s(X). To minimize the KL divergence while ensuring that Q_s(X) and Q_{s,i}(X_i) are properly normalized probability distributions (i.e., Σ_X Q_s(X) = 1 and Σ_{X_i} Q_{s,i}(X_i) = 1), the following iterative update is used: Q_{s,i}(x_i = l) = (1/Z_i) exp{−φ_u(x_i = l) − Σ_{l'∈L} μ(l, l') Σ_m w^(m) Σ_{j≠i} k^(m)(f_i, f_j) Q_{s,j}(l')}. From the viewpoint of signal processing, kernel functions can simplify the inner product operation in a mapping space.
Equation (20) can then be expressed via convolutions of Gaussian kernels in feature space: the message-passing term Σ_{j≠i} k^(m)(f_i, f_j) Q_{s,j}(l) equals [G_{Σ(m)} ⊗ Q_s(l)](f_i) − Q_{s,i}(l), where G_{Σ(m)} is a Gaussian with covariance Σ_(m). The convolution acts as a low-pass filter; according to the sampling theorem, the function can be reconstructed from a set of samples whose spacing is proportional to the standard deviation of the filter [11]. Therefore, we can perform the convolution by downsampling Q_s(l), convolving with G_{Σ(m)}, and upsampling the results at the feature points [14]. The truncated Gaussian approximation is a common approximation for Gaussian kernels, in which all values beyond two standard deviations are set to zero. Since the sample spacing is proportional to the standard deviation, truncated Gaussian kernels are supported on only a constant number of sample points; therefore, by aggregating values from only a constant number of adjacent samples, the convolution can be approximately calculated at each sample.

Learning The data-driven parameters to be learned are the symmetric label compatibility functions mentioned in the previous sections. The symmetry of μ is beneficial both for realizing the learning algorithm and for intuition: the penalty of the label {waters} for {ship} should equal the penalty of the label {ship} for {waters}. To learn this symmetric label compatibility function μ effectively, this paper uses the maximum likelihood estimation (MLE) criterion. The goal of MLE is to find the parameters that maximize the log-likelihood of the model given the training images I^(n) and their annotations T^(n), l(μ : T^(n), I^(n)) = Σ_n ln P(T^(n) | I^(n); μ). The Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm is adopted to learn the label compatibility functions by maximizing this log-likelihood. L-BFGS requires gradients, which are difficult to compute exactly because of the gradient of the normalization factor Z; therefore, the mean-field approximation described above is used to estimate the gradient of Z, yielding a simple per-image approximation of the gradient in which the exact marginals are replaced by the mean-field estimates Q.
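For illustration, a dense O(N²) version of the mean-field update above (the fast Gaussian-filtering implementation replaces the explicit message passing; the names and the softmax initialization are assumptions of this sketch):

```python
import numpy as np

def _softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field_inference(unary, kernel, compat, n_iters=5):
    """Naive O(N^2) mean-field inference for a fully connected CRF.

    unary:  (N, L) unary energies phi_u(x_i = l).
    kernel: (N, N) precomputed kernel values k(f_i, f_j), zero diagonal
            so that no pixel messages itself.
    compat: (L, L) label compatibility mu(l, l').
    Five iterations match the convergence observed in the experiments.
    """
    q = _softmax(-unary)                 # initialize from the unary term
    for _ in range(n_iters):
        messages = kernel @ q            # sum_j k(f_i, f_j) Q_j(l')
        pairwise = messages @ compat.T   # apply label compatibility
        q = _softmax(-(unary + pairwise))
    return q
```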
Experiments and Analysis In this section, we first introduce the dataset we use and the metrics used to test the model. Afterwards, experiments are conducted to test the proposed AF-CRF. The experiments demonstrate that the proposed method increases both the global accuracy (GA) and the Intersection over Union (IoU) and obtains a state-of-the-art result.

Dataset The UAV urban images were taken by a DJI Phantom 3 Standard at a height of 40 m, and the DSM was generated by PhotoScan [15] at a resolution of 2560 × 1536. The image used for the experiments is resized to the same resolution, and the DSM is manually aligned to the image using only translation and rotation. We then cut the original image and DSM into 160 slices of 256 × 256 pixels (60 slices in order without overlap and 100 slices at random with overlap) to train the networks. The image is labeled into five categories: background (not counted in the results), bridge (cls1), road (cls2), sidewalk (cls3), and vegetation (cls4). We randomly select 140 pairs for training and 20 pairs for validation. The original image, the DSM, and the cut slices are shown in Figures 2 and 3, respectively.

Implement Details The model is trained with L-BFGS as described above, and the loss is a per-pixel softmax, i.e., a multinomial cross-entropy loss in terms of the predicted scores x and the ground truth y, loss = −ln(e^{x_cls} / Σ_j e^{x_j}) summed over pixels, where x_cls is the predicted score of the ground-truth class y.
The unary potential adopts the output of previous semantic image segmentation networks, such as PSPNet-ds-ss [16], and treats it as the prior probability that the output is correct, which in this paper is set to 0.5 with a negative-logarithm operation. The output score h^t_i in Equation (15) adopts the output of the last layer of PSPNet-ds-ss. The label compatibility function is initialized with the Potts model, which acts as an identity matrix in the first training stage.

Results and Discussion For a quantitative evaluation, we adopt global accuracy (GA) and intersection over union (IoU) as metrics: GA = Σ_i n_ii / Σ_i t_i and IoU_cls = n_ii / (t_i + Σ_j n_ji − n_ii), where t_i is the total number of pixels of class i, the subscript cls denotes the accuracy within a specific class, k is the number of classes, and n_ij is the number of pixels that belong to class i and were classified as class j. GA represents the performance of the training, while IoU penalizes false-positive classifications to demonstrate the performance in semantic segmentation. For a general evaluation, the mean intersection over union, mIoU = (1/k) Σ_i IoU_i, is also adopted. Aiming at further applications of semantic segmentation, we also adopt a confidence metric for the model's outputs, which indicates the probability of the final output category; the confidence of the outputs matters when decisions are made by multi-sensor fusion. Following Equations (12) and (16) and [11], this paper sets θ_α = θ_β = 13; with regard to the height weight parameter θ_γ, grid searching shows that the model performs best when θ_γ = θ_α = θ_β = 13, as shown in Figure 4. Figures 5 and 6 show, respectively, the results of eight iterations of the original CRF model and the convergence of the KL divergence of the three conditional random fields, where D-CRF refers to a Dual-CRF (images and DSM) without the multi-scale strategy. As demonstrated, the KL divergence of the models generally converges within five iterations; hence, in subsequent experiments, the number of iterations was set to five. In addition, Figure 6 also reveals that the proposed AF-CRF has the fastest convergence speed among the three kinds of CRF.
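For reference, the GA, IoU, and mIoU defined above can be computed from a confusion matrix as in the following sketch (the function name and matrix convention are assumptions):

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """GA, per-class IoU, and mIoU from a confusion matrix.

    conf[i, j] = number of pixels of true class i predicted as class j
    (the n_ij above); t_i = conf[i].sum() is the pixel count of class i.
    The background class should be excluded beforehand.
    """
    n_ii = np.diag(conf).astype(float)
    t = conf.sum(axis=1)             # true pixels per class
    pred = conf.sum(axis=0)          # predicted pixels per class
    ga = n_ii.sum() / conf.sum()
    iou = n_ii / (t + pred - n_ii)   # per-class intersection over union
    return ga, iou, iou.mean()       # mIoU is the mean over the k classes
```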
Figure 5 shows the outputs of the original CRF model at each inference iteration (iterations 1–8), which nearly converge within five iterations. Figure 7 shows the pyramid structure of image–DSM–output. The higher the pyramid layer, the simpler the original image and the DSM features, and the more concentrated the attention of the output results. The outputs at the three scales are shown in Figure 8. When the scale factor is 1/8, the attention range is reduced to the extreme and the output of the whole image is of a single category. On the one hand, this reflects the characteristics of attention at different scales; on the other hand, it also indicates that in practice the pyramid scale should not be too large, otherwise the relevant features of the region of interest are lost. In subsequent experiments, only two pyramid scales were used, i.e., the scales of 1/2 and 1/4. The overall performance of the proposed AF-CRF is shown in Figure 9. Compared with the outputs of the other models, the output of AF-CRF is the closest to the ground truth, while carefully retaining the output characteristics of the Fully Convolutional Network (FCN) classifier. A quantitative evaluation is shown in Table 1.
Since the evaluation of PSPNet-ds-ss on several quantitative metrics already reaches a fairly high level, the performance of AF-CRF improves on it only slightly. To evaluate the performance of AF-CRF more fairly, a W-5 (Worst-5) index is proposed, defined as the improvement on the five worst results of the data output after the model, as shown in Table 2. The proposed AF-CRF significantly improves on the weaknesses of the former classifier. (Table 1 reports, for each model, the training and validation global accuracy, GA_T (%) and GA_V (%), the per-class IoU for cls1–cls4 (%), and the mIoU (%); the compared models include DeepLabv3+-ds-ss [17].) The values of the symmetric label compatibility function of the learned AF-CRF are shown in Figure 10, where the labels are {0: void}, {1: bridge}, {2: road}, {3: sidewalk}, and {4: vegetation}. From Figure 10, the result of learning is that the compatibility between the labels bridge and road, as well as road and sidewalk, is relatively large, while the label compatibility of road with pavement and of road with vegetation is restrained, which is consistent with the dataset. On the other hand, the imbalance in the dataset also affects the learning of the label compatibility function, making it focus more on the category {1: bridge}, which has the largest number of pixels.
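A possible reading of the W-5 index described above, as a sketch (assuming it averages the per-image improvement over the five images with the worst baseline scores; the exact aggregation in the paper may differ):

```python
import numpy as np

def worst5_improvement(before, after):
    """W-5: mean metric improvement on the five images the baseline
    classifier handles worst.

    before/after: per-image scores (e.g., IoU) without and with AF-CRF
    post-processing, aligned by image index.
    """
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)
    worst = np.argsort(before)[:5]   # indices of the 5 lowest baseline scores
    return float(np.mean(after[worst] - before[worst]))
```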
We also test the improvements to some other networks, as shown in Table 3. FCN8s, DeepLab, and U-Net are all classical networks in the field of semantic image segmentation, and their primary output results are significantly improved after AF-CRF post-processing. While focusing on global accuracy and IoU, this paper also considers the confidence of the model's outputs. As shown in Figure 11, the confidence of the model output is expressed as a heat map of the probability of the classification result for each pixel in the image. It can be seen from the figure that the output of AF-CRF not only improves the accuracy of PSPNet-ds-ss but also improves the output confidence of the former FCN classifier, especially in edge-independent regions. Due to the dense label–label connections of the AF-CRF model, the confidence of the outputs in continuous areas with large labels (as shown in the upper and lower left of Figure 11) is significantly improved. Confidence has a positive impact on the subsequent applications of semantic UAV image segmentation and, to some extent, affects the correctness of decisions in subsequent multi-sensor fusion. The quantitative evaluation of the confidence index is shown in Table 4: AF-CRF is better than the PSPNet-ds-ss model in both the average confidence and the minimum confidence of the outputs. More examples of AF-CRF output are shown in Figure 12.
The proposed AF-CRF also has some drawbacks. As shown in the third column of Figure 12, the wrong classification at the top of the image is difficult to correct: due to the dense connections between annotations and the influence of the former classifier through the unary potential, AF-CRF inevitably misses correct categories that do not appear in the output of the former classifier. In addition, as seen in the first column of Figure 12, after AF-CRF some parts that the former classifier labeled correctly acquire wrongly classified spots, the result of over-weighting long-range dependence; the categories at the locations where these errors occur are all so flat that the corresponding DSM fails to provide additional information.

Conclusions In this paper, we propose a novel Affiliated Fusion Conditional Random Field (AF-CRF) to semantically segment urban UAV images. The model adopts a DSM as extra geographical information to improve the segmentation results, along with a multi-scale analysis with attention. The experiments show that the proposed model improves not only the accuracy but also the confidence of the output results, which is of considerable benefit to further applications of urban UAV image semantic segmentation. The limitation of our method is that the DSM of the region of interest must be generated in advance, yet this will become easier as geographic information systems develop. Future work will focus on a more specialized form of the unary potential.
Measurement of the cross-section for electroweak production of dijets in association with a Z boson in pp collisions at √s = 13 TeV with the ATLAS detector The cross-section for the production of two jets in association with a leptonically decaying Z boson (Zjj) is measured in proton–proton collisions at a centre-of-mass energy of 13 TeV, using data recorded with the ATLAS detector at the Large Hadron Collider, corresponding to an integrated luminosity of 3.2 fb−1. The electroweak Zjj cross-section is extracted in a fiducial region chosen to enhance the electroweak contribution relative to the dominant Drell–Yan Zjj process, which is constrained using a data-driven approach. The measured fiducial electroweak cross-section is σ_Zjj^EW = 119 ± 16 (stat.) ± 20 (syst.) ± 2 (lumi.) fb for dijet invariant mass greater than 250 GeV, and 34.2 ± 5.8 (stat.) ± 5.5 (syst.) ± 0.7 (lumi.) fb for dijet invariant mass greater than 1 TeV. Standard Model predictions are in agreement with the measurements. The inclusive Zjj cross-section is also measured in six different fiducial regions with varying contributions from electroweak and Drell–Yan Zjj production.

Introduction At the Large Hadron Collider (LHC), events containing a Z boson and at least two jets (Zjj) are produced predominantly via initial-state QCD radiation from the incoming partons in the Drell–Yan process (QCD-Zjj), as shown in Fig. 1(a). In contrast, the production of Zjj events via t-channel electroweak gauge-boson exchange (EW-Zjj events), including the vector-boson fusion (VBF) process shown in Fig. 1(b), is a much rarer process. Such VBF processes for vector-boson production are of great interest as a 'standard candle' for other VBF processes at the LHC: e.g., the production of Higgs bosons or the search for weakly interacting particles beyond the Standard Model. The kinematic properties of Zjj events allow some discrimination between the QCD and EW production mechanisms. The emission of a virtual W boson from the quark in EW-Zjj events results in the presence of two high-energy jets with moderate transverse momentum (p_T), separated by a large interval in rapidity (Δy)¹ and therefore with the large dijet mass (m_jj) that characterises the EW-Zjj signal. A consequence of the exchange of a vector boson in Fig. 1(b) is that there is no colour connection between the hadronic systems produced by the break-up of the two incoming protons. As a result, EW-Zjj events are less likely to contain additional hadronic activity in the rapidity interval between the two high-p_T jets than corresponding QCD-Zjj events. The first studies of EW-Zjj production were performed [1] in pp collisions at a centre-of-mass energy (√s) of 7 TeV by the CMS Collaboration, where the background-only hypothesis was rejected at the 2.6σ level.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point in the centre of the detector and the z-axis along the beam pipe. In the transverse plane, the x-axis points from the interaction point to the centre of the LHC ring, the y-axis points upward, and φ is the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). The rapidity is defined as y = 0.5 ln[(E + p_z)/(E − p_z)], where E and p_z are the energy and longitudinal momentum, respectively.
The first observation of the EW-Zjj process was made by the ATLAS Collaboration at a centre-of-mass energy (√s) of 8 TeV [2]. The cross-section measurement is in agreement with predictions from the Powheg-box event generator [3][4][5] and allowed limits to be placed on anomalous triple gauge couplings. The CMS Collaboration has also observed and measured [6] the cross-section for EW-Zjj production at 8 TeV. This Letter presents measurements of the cross-section for EW-Zjj production and inclusive Zjj production at high dijet invariant mass in pp collisions at √s = 13 TeV, using data corresponding to an integrated luminosity of 3.2 fb−1 collected by the ATLAS detector at the LHC. These measurements allow the dependence of the cross-section on √s to be studied. The increased √s allows exploration of higher dijet masses, where the EW-Zjj contribution to the total Zjj rate becomes more pronounced.

ATLAS detector The ATLAS detector is described in detail in Refs. [7,8]. It consists of an inner detector for tracking, surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroidal magnet systems. The inner detector is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. Electromagnetic calorimetry is provided by barrel and end-cap lead/liquid-argon (LAr) calorimeters in the region |η| < 3.2. Within |η| < 2.47 the calorimeter is finely segmented in the lateral direction of the showers, allowing measurement of the energy and position of electrons, and providing electron identification in conjunction with the inner detector. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two hadronic end-cap calorimeters. A copper/LAr hadronic calorimeter covers the 1.5 < |η| < 3.2 region, and a forward copper/tungsten/LAr calorimeter with electromagnetic-shower identification capabilities covers the 3.1 < |η| < 4.9 region. The muon spectrometer comprises separate trigger and high-precision tracking chambers. The tracking chambers cover the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode strip chambers in part of the forward region, where the hit rate is highest. The muon trigger system covers the range |η| < 2.4 with resistive plate chambers in the barrel region and thin gap chambers in the end-cap regions. A two-level trigger system is used to select events of interest [9]. The Level-1 trigger is implemented in hardware and uses a subset of the detector information to reduce the event rate to around 100 kHz. This is followed by the software-based high-level trigger system, which reduces the event rate to about 1 kHz.

Monte Carlo samples The production of EW-Zjj events was simulated at next-to-leading-order (NLO) accuracy in perturbative QCD using the Powheg-box v1 Monte Carlo (MC) event generator [4,5,10] and, alternatively, at leading-order (LO) accuracy in perturbative QCD using the Sherpa 2.2.0 event generator [11]. For modelling of the parton shower, fragmentation, hadronisation and underlying event (UEPS), Powheg-box was interfaced to Pythia 8 [12] with a dedicated set of parton-shower-generator parameters (tune) denoted AZNLO [13] and the CT10 NLO parton distribution function (PDF) set [14]. The renormalisation and factorisation scales were set to the Z boson mass.
Sherpa predictions used the Comix [15] and OpenLoops [16] matrix element event generators, and the CKKW method was used to combine the various final-state topologies from the matrix element and match them to the parton shower [17]. The matrix elements were merged with the Sherpa parton shower [18] using the ME+PS@LO prescription [19,20], and using Sherpa's native dynamical scale-setting algorithm to set the renormalisation and factorisation scales. Sherpa predictions used the NNPDF30NNLO PDF set [21]. The production of QCD-Z j j events was simulated using three event generators, Sherpa 2.2.1, Alpgen 2.14 [22] and MadGraph5_aMC@NLO 2.2.2 [23]. Sherpa provides Z + n-parton predictions calculated for up to two partons at NLO accuracy and up to four partons at LO accuracy in perturbative QCD. Sherpa predictions used the NNPDF30NLO PDF set together with the tuning of the UEPS parameters developed by the Sherpa authors using the ME+PS@NLO prescription [19,20]. Alpgen is an LO event generator which uses explicit matrix elements for up to five partons and was interfaced to Pythia 6.426 [24] using the Perugia2011C tune [25] and the CTEQ6L1 PDF set [26]. Only matrix elements for lightflavour production in Alpgen are included, with heavy-flavour contributions modelled by the parton shower. MadGraph5_aMC@NLO 2.2.2 (MG5_aMC) uses explicit matrix elements for up to four partons at LO, and was interfaced to Pythia 8 with the A14 tune [27] and using the NNPDF23LO PDF set [28]. For reconstruction-level studies, total Z boson production rates predicted by all three event generators used to produce QCD-Z j j predictions are normalised using the next-to-next-to-leading-order (NNLO) predictions calculated with the FEWZ 3.1 program [29][30][31] using the CT10 NNLO PDF set [14]. However, when comparing particle-level theoretical predictions to detector-corrected measurements, the normalisation of quoted predictions is provided by the event generator in question rather than an external NNLO prediction. The production of a pair of EW vector bosons (diboson), where one decays leptonically and the other hadronically, or where both decay leptonically and are produced in association with two or more jets, through W Z or Z Z production with at least one Z boson decaying to leptons, was simulated separately using Sherpa 2.1.1 and the CT10 NLO PDF set. The largest background to the selected Z j j samples arises from tt and single-top (W t) production. These were generated using Powheg-box v2 and Pythia 6.428 with the Perugia2012 tune [25], and normalised using the cross-section calculated at NNLO+NNLL (next-to-next-to-leading log) accuracy using the Top++2.0 program [32]. All the above MC samples were fully simulated through the Geant 4 [33] simulation of the ATLAS detector [34]. The effect of additional pp interactions (pile-up) in the same or nearby bunch crossings was also simulated, using Pythia v8.186 with the A2 tune [35] and the MSTW2008LO PDF set [36]. The MC samples were reweighted so that the distribution of the average number of pileup interactions per bunch crossing matches that observed in data. For the data considered in this Letter, the average number of interactions is 13.7. Event preselection The Z bosons are measured in their dielectron and dimuon decay modes. Candidate events are selected using triggers requiring at least one identified electron or muon with transverse momentum thresholds of p T = 24 GeV and 20 GeV respectively, with additional isolation requirements imposed in these triggers. 
At higher transverse momenta, the efficiency of selecting candidate events is improved through the use of additional electron and muon triggers without isolation requirements and with thresholds of p_T = 60 GeV and 50 GeV, respectively. Candidate electrons are reconstructed from clusters of energy in the electromagnetic calorimeter matched to inner-detector tracks [37]. They must satisfy the Medium identification requirements described in Ref. [37] and have p_T > 25 GeV and |η| < 2.47, excluding the transition region between the barrel and end-cap calorimeters at 1.37 < |η| < 1.52. Candidate muons are identified as tracks in the inner detector matched and combined with track segments in the muon spectrometer. They must satisfy the Medium identification requirements described in Ref. [38], and have p_T > 25 GeV and |η| < 2.4. Candidate leptons must also satisfy a set of isolation criteria based on reconstructed tracks and calorimeter activity. Events are required to contain exactly two leptons of the same flavour but of opposite charge. The dilepton invariant mass must satisfy 81 < m_ℓℓ < 101 GeV. Candidate hadronic jets are required to satisfy p_T > 25 GeV and |y| < 4.4. They are reconstructed from clusters of energy in the calorimeter [39] using the anti-k_t algorithm [40,41] with radius parameter R = 0.4. Jet energies are calibrated by applying p_T- and y-dependent corrections derived from Monte Carlo simulation, with additional in situ correction factors determined from data [42]. To reduce the impact of pile-up contributions, all jets with |y| < 2.4 and p_T < 60 GeV are required to be compatible with having originated from the primary vertex (the vertex with the highest sum of track p_T²), as defined by the jet vertex tagger algorithm [43]. Selected electrons and muons are discarded if they lie within ΔR = 0.4 of a reconstructed jet. This requirement is imposed to remove non-prompt, non-isolated leptons produced in heavy-flavour decays or from the decay in flight of a kaon or pion.

Definition of particle-level cross-sections Cross-sections are measured for inclusive Zjj production, which includes the EW-Zjj and QCD-Zjj processes as well as diboson events. The particle-level production cross-section for inclusive Zjj production in a given fiducial region f is given by σ_Zjj^f = (N_obs^f − N_bkg^f) / (C^f · L), where N_obs^f is the number of events observed in the data passing the selection requirements of the fiducial region under study at detector level, N_bkg^f is the corresponding number of expected background (non-Zjj) events, L is the integrated luminosity corresponding to the analysed data sample, and C^f is a correction factor applied to the observed data yields, which accounts for experimental efficiency and detector resolution effects and is derived from MC simulation with data-driven efficiency and energy/momentum scale corrections. This correction factor is calculated as C^f = N_det^f / N_particle^f, where N_det^f is the number of signal events that satisfy the fiducial selection criteria at detector level in the MC simulation, and N_particle^f is the number of signal events that pass the equivalent selection at particle level. These correction factors have values between 0.63 and 0.77, depending on the fiducial region. With the exception of background from multijet and W + jets processes (henceforth referred to together simply as multijet processes), contributions to N_bkg^f are estimated using the Monte Carlo samples described in Section 3.
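As a numerical illustration of this extraction (hypothetical inputs; only the Poisson statistical uncertainty on N_obs is propagated here, whereas the measurement also quotes systematic and luminosity terms):

```python
import math

def fiducial_xsec(n_obs: float, n_bkg: float, c_f: float, lumi_fb: float):
    """sigma_f = (N_obs - N_bkg) / (C_f * L), in fb for L in fb^-1.

    n_obs: observed events in the detector-level fiducial region.
    n_bkg: expected background (non-Zjj) events.
    c_f:   correction factor N_det / N_particle (0.63-0.77 here).
    Returns the cross-section and its Poisson statistical uncertainty.
    """
    sigma = (n_obs - n_bkg) / (c_f * lumi_fb)
    stat = math.sqrt(n_obs) / (c_f * lumi_fb)
    return sigma, stat
```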
Background from multijet events is estimated from the data by reversing requirements on lepton identification or isolation to derive a template for the contribution of jets mis-reconstructed as lepton candidates, as a function of the dilepton mass. Non-multijet background is subtracted from the template using simulation. The normalisation is derived by fitting the nominal dilepton mass distribution in each fiducial region with the sum of the multijet template and a template comprising the signal and background contributions determined from simulation. The multijet contribution is found to be less than 0.3% in each fiducial region. The contribution from W + jets processes was checked using MC simulation and found to be much smaller than the total multijet background as determined from data. At particle level, only final-state particles with proper lifetime cτ > 10 mm are considered. Prompt leptons are dressed using the four-momentum combination of an electron or muon and all photons (not originating from hadron decays) within a cone of size ΔR = 0.1 centred on the lepton. These dressed leptons are required to satisfy p_T > 25 GeV and |η| < 2.47. Events are required to contain exactly two dressed leptons of the same flavour but of opposite charge, and the dilepton invariant mass must satisfy 81 < m_ll < 101 GeV. Jets are reconstructed using the anti-kt algorithm with radius parameter R = 0.4. Prompt leptons and the photons used to dress them are not included in the particle-level jet reconstruction; all remaining final-state particles are included in the particle-level jet clustering. Prompt leptons with a separation ΔR(j, l) < 0.4 from any jet are rejected. The cross-section measurements are performed in the six phase-space regions defined in Table 1. These regions are chosen to have varying contributions from the EW-Zjj and QCD-Zjj processes.

Event selection
Following Ref. [2], events are selected in six detector fiducial regions. As far as possible, these are defined with the same kinematic requirements as the six phase-space regions in which the cross-section is measured (Table 1). This minimises systematic uncertainties in the modelling of the acceptance. The baseline fiducial region represents an inclusive selection of events containing a leptonically decaying Z boson and at least two jets with p_T > 45 GeV, at least one of which satisfies p_T > 55 GeV. The two highest-p_T (leading and sub-leading) jets in a given event define the dijet system. The baseline region is dominated by QCD-Zjj events. The requirement of 81 < m_ll < 101 GeV suppresses other sources of dilepton events, such as tt̄ and Z → ττ, as well as the multijet background. Because the energy scale of the dijet system is typically higher in events produced by the EW-Zjj process than in those produced by the QCD-Zjj process, two subsets of the baseline region are defined which probe the EW-Zjj contribution in different ways: in the high-mass fiducial region a high value of the invariant mass of the dijet system (m_jj > 1 TeV) is required, and in the high-p_T fiducial region the minimum p_T of the leading and sub-leading jets is increased to 85 GeV and 75 GeV respectively. The EW-Zjj process typically produces harder jet transverse momenta and results in a harder dijet invariant mass spectrum than the QCD-Zjj process. Three additional fiducial regions allow the separate contributions from the EW-Zjj and QCD-Zjj processes to be measured.
The EW-enriched fiducial region is designed to enhance the EW-Zjj contribution relative to that from QCD-Zjj, particularly at high m_jj. The EW-enriched region is derived from the baseline region by requiring m_jj > 250 GeV, a dilepton transverse momentum of p_T^ll > 20 GeV, and that the normalised transverse momentum balance between the two leptons and the two highest transverse momentum jets satisfy p_T^balance < 0.15. The latter quantity is given by

p_T^balance = | p_T^(l1) + p_T^(l2) + p_T^(j1) + p_T^(j2) | / ( |p_T^(l1)| + |p_T^(l2)| + |p_T^(j1)| + |p_T^(j2)| ),   (2)

where p_T^i is the transverse momentum vector of object i, l1 and l2 label the two leptons that define the Z boson candidate, and j1 and j2 refer to the leading and sub-leading jets. These requirements help remove events in which the jets arise from pile-up or from multiple parton interactions. The requirement on p_T^balance also helps suppress events in which the p_T of one or more jets is badly measured, and it enhances the EW-Zjj contribution, for which the lower probability of additional radiation causes the Z boson and the dijet system to be well balanced. The EW-enriched region requires a veto [44] on any jets with p_T > 25 GeV reconstructed within the rapidity interval bounded by the dijet system (N_jet^interval(p_T > 25 GeV) = 0). A second fiducial region, denoted EW-enriched (m_jj > 1 TeV), has identical selection criteria except for a raised m_jj threshold of 1 TeV, which further enhances the EW-Zjj contribution to the total Zjj signal rate. In contrast, the QCD-enriched fiducial region is designed to suppress the EW-Zjj contribution relative to QCD-Zjj by requiring at least one jet with p_T > 25 GeV to be reconstructed within the rapidity interval bounded by the dijet system (N_jet^interval(p_T > 25 GeV) ≥ 1). In the QCD-enriched region, the definition of the normalised transverse momentum balance is modified from that given in Eq. (2) to include, in the calculation of the numerator and the denominator, the p_T of the highest-p_T jet within the rapidity interval bounded by the dijet system (p_T^balance,3). In all other respects, the kinematic requirements in the EW-enriched and QCD-enriched regions are identical.

Detector-level results
In the baseline region, 30 686 events are selected in the dielectron channel and 36 786 events in the dimuon channel. The total observed yields agree with the expected yields within statistical uncertainties in each dilepton channel. The largest deviation across all fiducial regions is a 2σ (statistical) difference between the expected-to-observed ratios in the electron and muon channels in the high-p_T region. The expected composition of the selected data samples in the six Zjj fiducial regions is summarised in Table 2, averaging over the dielectron and dimuon channels, as the compositions in the two dilepton channels agree within statistical uncertainties. The numbers of selected events in data and the expectations from the total signal plus background estimates are also given for each region. The largest discrepancy between observed and expected yields is seen in the high-mass region and results from a mismodelling of the m_jj spectrum in the QCD-Zjj MC simulations used, which is discussed below and accounted for in the assessment of the systematic uncertainties of the measurement.
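For concreteness, the p_T^balance variable of Eq. (2) used in the event selection above can be computed as in the following sketch (numpy-based; the function name and the example numbers are illustrative):

```python
import numpy as np

def pt_balance(pt_l1, pt_l2, pt_j1, pt_j2):
    """Normalised transverse-momentum balance of the two leptons and the two
    leading jets, Eq. (2): |vector sum of pT| / scalar sum of |pT|."""
    vectors = [np.asarray(v, dtype=float) for v in (pt_l1, pt_l2, pt_j1, pt_j2)]
    return np.linalg.norm(sum(vectors)) / sum(np.linalg.norm(v) for v in vectors)

# Example with hypothetical (px, py) components in GeV:
print(pt_balance([40.0, 5.0], [-35.0, 8.0], [60.0, -10.0], [-58.0, -4.0]))
```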
Systematic uncertainties in the inclusive Zjj fiducial cross-sections
Experimental systematic uncertainties affect the determination of the C_f correction factor and the background estimates. The dominant systematic uncertainty in the inclusive Zjj fiducial cross-sections arises from the calibration of the jet energy scale and resolution. This uncertainty varies from around 4% in the EW-enriched region to around 12% in the QCD-enriched region. The larger uncertainty in the QCD-enriched region is due to the higher average jet multiplicity (an average of 1.7 jets in addition to the leading and sub-leading jets) compared with the EW-enriched region (an average of 0.4 additional jets). Other experimental systematic uncertainties, arising from lepton efficiencies related to reconstruction, identification, isolation and trigger, from the lepton energy/momentum scale and resolution, and from the effect of pile-up, amount to a total of around 1-2%, depending on the fiducial region. The systematic uncertainty arising from the MC modelling of the m_jj distribution in the QCD-Zjj and EW-Zjj signal processes is around 3% in the EW-enriched region, around 1% in the QCD-enriched region, 2% in the high-mass region, and below 1% elsewhere. It is assessed by comparing the correction factors obtained using the different MC event generators listed in Section 3 and by performing a data-driven reweighting of the QCD-Zjj MC sample to describe the m_jj distribution of the observed data in a given fiducial region. Additional contributions arise from varying the QCD renormalisation and factorisation scales up and down by a factor of two independently, and from the propagation of uncertainties in the PDF sets. The normalisation of the diboson contribution is varied according to the PDF and scale variations in these predictions [45], and results in up to a 0.1% effect on the measured Zjj cross-sections, depending on the fiducial region. The uncertainty from varying the normalisation and the shape in m_jj of the estimated background from top-quark production is at most 1% (in the high-mass region), arising from changes in the extracted Zjj cross-sections when using modified top-quark background MC samples with PDF and scale variations and with suppressed or enhanced additional radiation.

Table 2 Estimated composition (in percent) of the data samples selected in the six Zjj fiducial regions for the dielectron and dimuon channels combined, using the EW-Zjj sample from Powheg and the QCD-Zjj sample from Sherpa (normalised using NNLO predictions for the inclusive Z cross-section calculated with FEWZ). Uncertainties in the sample contributions are statistical only. Also shown are the total expected yields and the total observed yields in each fiducial region. Uncertainties in the total expected yields are statistical (first) and systematic (second); see Section 5.4 for details.

The uncertainty in the integrated luminosity is taken from Ref. [47], from a calibration of the luminosity scale using x-y beam-separation scans performed in June 2015.

Inclusive Zjj results
The measured cross-sections in the dielectron and dimuon channels are combined and presented here as a weighted average (taking into account total uncertainties) across both channels. These cross-sections are determined using each of the correction factors derived from the six combinations of the three QCD-Zjj (Alpgen, MG5_aMC and Sherpa) and two EW-Zjj (Powheg and Sherpa) MC samples. For a given fiducial region (Table 1), the cross-section averaged over all six variations is presented in Table 3.
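The averaging over the six template combinations, and the envelope that the next paragraph assigns as a systematic uncertainty, can be sketched as follows (an illustrative helper; the inputs would be the six extracted cross-sections):

```python
import numpy as np

def combine_template_variations(xsecs):
    """Central value and modelling envelope from the cross-sections extracted
    with the six QCD-Zjj x EW-Zjj template combinations: the central value is
    the mean, and the full spread (envelope) is quoted as the uncertainty."""
    xsecs = np.asarray(xsecs, dtype=float)
    return xsecs.mean(), xsecs.max() - xsecs.min()
```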
The envelope of variation between the QCD-Zjj and EW-Zjj models is assigned as a source of systematic uncertainty (1% in all regions except the EW-enriched region, where the variation is 3%, and the high-mass region, where the variation is 2%). The theoretical predictions from Sherpa (QCD-Zjj) + Powheg (EW-Zjj), MG5_aMC (QCD-Zjj) + Powheg (EW-Zjj), and Alpgen (QCD-Zjj) + Powheg (EW-Zjj) are found to be in agreement with the measurements in most cases. The uncertainties in the theoretical predictions are significantly larger than those in the corresponding measurements. The largest differences between predictions and measurement are in the high-mass and EW-enriched (m_jj > 250 GeV and m_jj > 1 TeV) regions. Predictions from Sherpa (QCD-Zjj) + Powheg (EW-Zjj) and MG5_aMC (QCD-Zjj) + Powheg (EW-Zjj) exceed the measurements in the high-mass region by 54% and 34% respectively, where the predictions have relative uncertainties with respect to the measurement of 36% and 32%. For the EW-enriched region, Sherpa (QCD-Zjj) + Powheg (EW-Zjj) describes the observed rates well, but MG5_aMC (QCD-Zjj) + Powheg (EW-Zjj) overestimates the measurements by 28%, with a relative uncertainty of 11%. In the EW-enriched (m_jj > 1 TeV) region, the same predictions overestimate the measured rates by 33% and 57%, with relative uncertainties of 16% and 15%. Some of these differences arise from a significant mismodelling of the QCD-Zjj contribution, as investigated and discussed in detail in Section 6.1. Predictions from Alpgen (QCD-Zjj) + Powheg (EW-Zjj) are in agreement with the data for the high-mass and EW-enriched (m_jj > 250 GeV and m_jj > 1 TeV) regions.

Measurement of EW-Zjj fiducial cross-sections
The EW-enriched fiducial region (defined in Table 1) is used to measure the production cross-section of the EW-Zjj process. The EW-enriched region has an overall expected EW-Zjj signal fraction of 4.8% (Table 2), and this signal fraction grows with increasing m_jj, reaching 26.1% for m_jj > 1 TeV. The QCD-enriched region has an overall expected EW-Zjj signal fraction of 1.6%, increasing to 4.4% for m_jj > 1 TeV. The dominant background to the EW-Zjj cross-section measurement is QCD-Zjj production. It is subtracted in the same way as the non-Zjj backgrounds in the inclusive measurement described in Section 5. Although diboson production includes contributions from purely EW processes, in this measurement it is considered as part of the background and is estimated from simulation. The particle-level production cross-section for EW-Zjj production in a given fiducial region f is thus given by

σ^f_EW = (N^f_obs − N^f_bkg − N^f_QCD-Zjj) / (C^f_EW · L),   (3)

with the same notation as in Eq. (1), and where N^f_QCD-Zjj is the expected number of QCD-Zjj events passing the selection requirements of the fiducial region at detector level, N^f_bkg is the expected number of background (non-Zjj and diboson) events, and C^f_EW is a correction factor applied to the observed background-subtracted data yields that accounts for experimental efficiency and detector resolution effects and is derived from EW-Zjj MC simulation with data-driven efficiency and energy/momentum scale corrections. For the m_jj > 250 GeV (m_jj > 1 TeV) region this correction factor is determined to be 0.66 (0.67) when using the Sherpa EW-Zjj prediction, and 0.67 (0.68) when using the Powheg EW-Zjj prediction. Detector-level comparisons of the m_jj distribution between data and simulation in (a) the EW-enriched region and (b) the QCD-enriched region are shown in Fig. 2.
Table 3 Measured and predicted inclusive Zjj production cross-sections (in pb) in the six fiducial regions defined in Table 1. For the measured cross-sections, the first uncertainty given is statistical, the second is systematic and the third is due to the luminosity determination. For the predictions, the statistical uncertainty is added in quadrature to the systematic uncertainties arising from the PDFs and the factorisation and renormalisation scale variations.

It can be seen in Fig. 2(a) that in the EW-enriched region the EW-Zjj component becomes prominent at large values of m_jj. However, Fig. 2(b) demonstrates that the shape of the m_jj distribution for QCD-Zjj production is poorly modelled in simulation. The same trend is seen for all three QCD-Zjj event generators listed in Section 3. Alpgen provides the best description of the data over the whole m_jj range. In comparison, MG5_aMC and Sherpa overestimate the data by 80% and 120% respectively at m_jj = 2 TeV, well outside the uncertainties in these predictions described in Table 3. These discrepancies have been observed previously in Zjj [2,48] and Wjj [49-51] production at high dijet invariant mass and at high jet rapidities. For the purpose of extracting the cross-section for EW-Zjj production, this mismodelling of QCD-Zjj is corrected using a data-driven approach, as discussed in the following.

Corrections for mismodelling of QCD-Zjj production and fitting procedure
The normalisation of the QCD-Zjj background is extracted from a fit of the simulated QCD-Zjj and EW-Zjj m_jj distributions to the data in the EW-enriched region, after subtraction of the non-Zjj and diboson backgrounds, using a log-likelihood maximisation [52]. Following the procedure adopted in Ref. [2], the data in the QCD-enriched region are used to evaluate detector-level shape correction factors for the QCD-Zjj MC predictions bin-by-bin in m_jj. These data-to-simulation ratio correction factors are applied to the simulated m_jj shape of the QCD-Zjj contribution in the EW-enriched region. This procedure is motivated by two observations: (a) the QCD-enriched and EW-enriched regions are designed to be kinematically very similar, differing only in the presence or absence of jets reconstructed within the rapidity interval bounded by the dijet system; (b) the contribution of EW-Zjj at high m_jj is suppressed in the QCD-enriched region (4.4% for m_jj > 1 TeV) relative to that in the EW-enriched region (26.1% for m_jj > 1 TeV), as also illustrated in Fig. 2. The impact of the residual EW-Zjj contamination in the QCD-enriched region is assigned as a component of the systematic uncertainty in the QCD-Zjj background. The shape correction factors in m_jj obtained using the three different QCD-Zjj MC samples are shown in Fig. 3(a). They are derived as the ratio of data to simulation in bins of m_jj, after normalisation of the total simulated yield to that observed in data in the QCD-enriched region. A binned fit to the correction factors is performed with a linear function (and also with a quadratic function) of the dijet invariant mass to produce a continuous correction factor. The linear fit is illustrated overlaid on the binned correction factors in Fig. 3(a).
The nominal value of the EW-Zjj cross-section corresponding to a particular QCD-Zjj event generator template is determined using the correction factors from the linear fit. The change in the resultant EW-Zjj cross-section when using the binned correction factors directly is assessed as a systematic uncertainty; the change in the extracted EW-Zjj cross-section when using a quadratic fit was found to be negligible. The variations observed between event generators may be partly due to differences in the modelling of QCD radiation within the rapidity interval bounded by the dijet system, which affects the extrapolation from the central-jet-enriched QCD-enriched region to the central-jet-suppressed EW-enriched region. The variation between event generators is much larger than the effect of the PDF and scale uncertainties in a particular prediction (indicated in Fig. 3(a) by a shaded band on the predictions from Sherpa). Estimating the uncertainties associated with the QCD-Zjj mismodelling from PDF and scale variations around a single generator prediction would thus underestimate the true theoretical uncertainty associated with this mismodelling. In this measurement, the span of the resultant EW-Zjj cross-sections extracted with each of the three QCD-Zjj templates is assessed as a systematic uncertainty. The variation in the EW-Zjj cross-section measurement due to a change of the EW-Zjj signal template used in the derivation of the m_jj correction factors (from Powheg to Sherpa) is found to be negligible. To test the dependence of the QCD-Zjj correction factors on the modelling of any additional jet emitted in the dijet rapidity interval, the QCD-enriched control region is divided into pairs of mutually exclusive subsets according to the |y| of the highest-p_T jet within the rapidity interval bounded by the dijet system, the p_T of that jet, or the value of N_jet^interval(p_T > 25 GeV). The continuous correction factors are determined from each subregion using both a linear and a quadratic fit to the data. Correction factors derived in the subregions using quadratic fits result in the largest variation in the extracted cross-sections. These fits are shown in Fig. 3(b) for the Alpgen QCD-Zjj sample, which displays the largest variation between subregions of the three event generators used to produce QCD-Zjj predictions. Within statistical uncertainties, the measured EW-Zjj cross-sections are not sensitive to the definition of the control region used. The normalisations of the corrected QCD-Zjj templates and of the EW-Zjj templates are allowed to vary independently in a fit to the background-subtracted m_jj distribution in the EW-enriched region. The measured electroweak production cross-section is determined from the data minus the QCD-Zjj contribution determined from these fits (Eq. (3)). As the choice of EW-Zjj template can influence the normalisation of the QCD-Zjj template in the EW-enriched region fit, the EW-Zjj cross-section determination is repeated for each QCD-Zjj template using either the Powheg or the Sherpa EW-Zjj template in the fit. The central value quoted is the average of the measured EW-Zjj cross-sections determined with each of the six combinations of the three QCD-Zjj and two EW-Zjj templates, with the envelope of the measured results from these variations taken as an uncertainty associated with the dependence on the modelling of the templates in the EW-enriched region.
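The bin-by-bin shape corrections and the linear fit described above can be sketched as follows. This is a simplified illustration (an unweighted least-squares fit, whereas the analysis uses a binned fit with statistical uncertainties), and all names are hypothetical:

```python
import numpy as np

def mjj_correction_function(data_counts, mc_counts, mjj_centres):
    """Derive a continuous QCD-Zjj shape correction in m_jj from the
    QCD-enriched region: normalise the MC to the data yield, form the
    bin-by-bin data/MC ratios, and fit a straight line through them."""
    data = np.asarray(data_counts, dtype=float)
    mc = np.asarray(mc_counts, dtype=float)
    mc = mc * data.sum() / mc.sum()                 # total-yield normalisation
    ratios = data / mc                              # binned correction factors
    slope, intercept = np.polyfit(mjj_centres, ratios, deg=1)  # linear fit
    return lambda mjj: slope * np.asarray(mjj, dtype=float) + intercept
```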
Separate uncertainties are assigned for the determination of the QCD-Zjj correction factors in the QCD-enriched region and for their propagation into the EW-enriched region. The measurement of the EW-Zjj cross-section in the EW-enriched region for m_jj > 1 TeV is extracted from the same fit procedure, with the data and QCD-Zjj yields integrated for m_jj > 1 TeV. Fig. 4(a) shows a comparison in the EW-enriched region of the fitted EW-Zjj and m_jj-reweighted QCD-Zjj templates to the background-subtracted data, from which the measured EW-Zjj cross-section is extracted. Fig. 4(b) demonstrates how the data in the EW-enriched region are modelled by the fitted EW-Zjj and m_jj-reweighted QCD-Zjj templates, for the three different QCD-Zjj event generators (and their corresponding correction factors derived in the QCD-enriched region, shown in Fig. 3(a)). Despite significantly different modelling of the m_jj distribution between event generators, and different models for additional QCD radiation, the results of the combined correction and fit procedure give a consistent description of the data.

Systematic uncertainties in the EW-Zjj fiducial cross-section
The total systematic uncertainty in the cross-section for EW-Zjj production in the EW-enriched fiducial region is 17% (16% in the EW-enriched m_jj > 1 TeV region). The sources and size of each systematic uncertainty are summarised in Table 4. Systematic uncertainties associated with the EW-Zjj signal template used in the fit and with the EW-Zjj signal extraction are obtained from the variation in the measured cross-section when using either of the individual EW-Zjj MC samples (Powheg and Sherpa) compared to the average of the two, taken as the central value.

Fig. 4. (a) Comparison in the EW-enriched region of the sum of the EW-Zjj and m_jj-reweighted QCD-Zjj templates to the data (minus the non-Zjj backgrounds). The normalisation of the templates is adjusted to the results of the fit (see text for details). The EW-Zjj MC sample comes from the Powheg event generator and the QCD-Zjj MC sample from the Alpgen event generator. (b) The ratio of the sum of the EW-Zjj and m_jj-reweighted QCD-Zjj templates to the background-subtracted data in the EW-enriched region, for three different QCD-Zjj MC predictions. The normalisation of the templates is adjusted to the results of the fit. Error bars represent the statistical uncertainties in the data and in the combined QCD-Zjj plus EW-Zjj MC samples, added in quadrature. The hatched band represents the experimental systematic uncertainties in the m_jj distribution.

Table 4 Systematic uncertainties contributing to the measurement of the EW-Zjj cross-sections for m_jj > 250 GeV and m_jj > 1 TeV. Uncertainties are grouped into EW-Zjj signal modelling, QCD-Zjj background modelling, QCD-EW interference, non-Zjj backgrounds, and experimental sources.

Uncertainties in the EW-Zjj templates due to variations of the QCD scales, of the PDFs and of the UEPS model are also included, as are the statistical uncertainties of the templates themselves. Following the extraction of the EW-Zjj cross-section in the EW-enriched regions, the normalisations of the EW-Zjj MC samples are modified to agree with the measurements and the potential EW contamination of the QCD-enriched region is recalculated, which leads to a modification of the QCD-Zjj correction factors.
The EW-Zjj cross-section measurement is repeated with these modified QCD-Zjj templates, and the change in the resultant cross-sections is assigned as a systematic uncertainty associated with the EW-Zjj contamination of the QCD-enriched region. As discussed in Section 6.1, the use of a QCD-enriched region provides a way to correct for QCD-Zjj modelling issues and also constrains the theoretical and experimental uncertainties associated with observables constructed from the two leading jets. Nevertheless, the largest contribution to the total uncertainty arises from modelling uncertainties associated with the propagation of the m_jj correction factors for QCD-Zjj from the QCD-enriched region into the EW-enriched region; these correction factors depend on the modelling of the additional jet activity in the QCD-Zjj MC samples used in the measurement. The uncertainty is assessed by repeating the EW-Zjj cross-section measurement with m_jj-reweighted QCD-Zjj MC templates from Alpgen, MG5_aMC and Sherpa, and assigning the variation of the measured cross-sections from the central EW-Zjj result as a systematic uncertainty. Statistical uncertainties from data and simulation in the m_jj correction factors derived in the QCD-enriched region are also propagated to the measured EW-Zjj cross-section as a systematic uncertainty. Uncertainties associated with the QCD renormalisation and factorisation scales, the PDF error sets and the UEPS modelling are assessed by studying the change in the extracted EW-Zjj cross-sections when repeating the measurement procedure, including rederiving the m_jj correction factors in the QCD-enriched region and repeating the fits in the EW-enriched region, using modified QCD-Zjj MC templates. Statistical uncertainties in the QCD-Zjj template in the EW-enriched region are also propagated as a systematic uncertainty in the EW-Zjj cross-section measurement. Potential quantum-mechanical interference between the QCD-Zjj and EW-Zjj processes is assessed using MG5_aMC to derive a correction to the QCD-Zjj template as a function of m_jj. The impact of interference on the measurement is determined by repeating the EW-Zjj measurement procedure twice, applying this correction to the QCD-Zjj template either only in the QCD-enriched region or only in the EW-enriched region, and taking the maximum change in the measured EW-Zjj cross-section as a symmetrised uncertainty. This approach assumes that the interference affects only one of the two fiducial regions and therefore has a maximal impact on the signal extraction. Potential interference between the Zjj and diboson processes was found to be negligible. Normalisation and shape uncertainties in the estimated background from top-quark and diboson production are assessed with varied background templates as described in Section 5.4, albeit with significantly larger uncertainties in the EW-enriched fiducial region compared to the baseline region. Experimental systematic uncertainties arising from the jet energy scale and resolution, from lepton efficiencies related to reconstruction, identification, isolation and trigger, from the lepton energy/momentum scale and resolution, and from pile-up modelling are independently assessed by repeating the EW-Zjj measurement procedure using modified QCD-Zjj and EW-Zjj templates.
Here, the QCD-enriched QCD-Zjj template constraint procedure described in Section 6.1 has the added benefit of significantly reducing the jet-based experimental uncertainties, as can be seen in Table 4 from their small impact on the total systematic uncertainty.

Electroweak Zjj results
As in the inclusive Zjj cross-section measurements, the quoted EW-Zjj cross-section measurements are the average of the cross-sections determined with each of the six combinations of the three QCD-Zjj MC templates and the two EW-Zjj MC templates. The measured cross-sections for the EW production of a leptonically decaying Z boson and at least two jets satisfying the fiducial requirements for the EW-enriched regions given in Table 1, with the requirements m_jj > 250 GeV and m_jj > 1 TeV, are shown in Table 5, where they are compared to predictions from Powheg+Pythia. The use of a differential template fit in m_jj to extract the EW-Zjj signal allows the systematic uncertainties in the EW-Zjj cross-section measurements to be constrained by the bins with the most favourable balance of EW-Zjj signal purity and minimal shape and normalisation uncertainty. For the m_jj > 250 GeV region, although all m_jj bins contribute to the fit, the individually most constraining m_jj interval is the 900-1000 GeV bin. The use of this method results in very similar relative systematic uncertainties in the EW-Zjj cross-section measurements at the two different m_jj thresholds, despite the measured relative EW-Zjj contribution to the total Zjj rate for m_jj > 1 TeV being more than six times the relative contribution of EW-Zjj for m_jj > 250 GeV. The EW-Zjj cross-sections at √s = 13 TeV are in agreement with the predictions from Powheg+Pythia for both m_jj > 250 GeV and m_jj > 1 TeV. The effect on the measurement of the inclusive Zjj production rates (Section 5.5) of correcting the EW-Zjj production rates predicted by Powheg+Pythia to the measured rates presented here was found to be negligible. Modifications of the m_jj distribution shape are already accounted for as a systematic uncertainty in the inclusive Zjj measurements. In the EW-enriched region, for m_jj thresholds of 250 GeV and 1 TeV, the measured EW-Zjj cross-sections at 13 TeV are found to be respectively 2.2 and 3.2 times as large as those measured at 8 TeV, as illustrated in Fig. 6.

Table 5 Measured and predicted EW-Zjj production cross-sections in the EW-enriched fiducial regions with and without an additional kinematic requirement of m_jj > 1 TeV. For the measured cross-sections, the first uncertainty given is statistical, the second is systematic and the third is due to the luminosity determination. For the predictions, the quoted uncertainty represents the statistical uncertainty plus the systematic uncertainties from the PDFs and the factorisation and renormalisation scale variations, all added in quadrature.

Summary
Fiducial cross-sections for the electroweak production of two jets in association with a leptonically decaying Z boson in proton-proton collisions are measured at a centre-of-mass energy of 13 TeV, using data corresponding to an integrated luminosity of 3.2 fb^-1 recorded with the ATLAS detector at the Large Hadron Collider. The EW-Zjj cross-section is extracted in a fiducial region chosen to enhance the EW contribution relative to the dominant QCD-Zjj process, which is constrained using a data-driven approach.
The measured fiducial EW-Zjj cross-section, σ^EW_Zjj, is given in Table 5. The higher centre-of-mass energy allows a region of higher dijet mass to be explored, in which the EW-Zjj signal is more prominent. The Standard Model predictions are in agreement with the EW-Zjj measurements. The inclusive Zjj cross-section is also measured in six different fiducial regions with varying contributions from EW-Zjj and QCD-Zjj production. At higher dijet invariant masses (> 1 TeV), particularly crucial for precision measurements of EW-Zjj production and for searches for new phenomena in vector-boson-fusion topologies, predictions from Sherpa (QCD-Zjj) + Powheg (EW-Zjj) and MG5_aMC (QCD-Zjj) + Powheg (EW-Zjj) are found to significantly overestimate the observed Zjj production rates in data. Alpgen (QCD-Zjj) + Powheg (EW-Zjj) provides a better description of the m_jj shape. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities.
Rapid Navigation Function Control for Two-Wheeled Mobile Robots
This paper presents a kinematic controller for a differentially driven mobile robot. The controller is based on the navigation function (NF) concept, which guarantees goal achievement from almost all initial states. Slow convergence in some cases is a significant disadvantage of this approach, especially when narrow passages exist in the environment and/or specific values of the design parameters are set. The main reason for this phenomenon is that the velocity control strongly depends on the slope of the NF. The algorithm proposed in this paper is based on a method introduced in Urakubo (Nonlinear Dyn. 81(3):1475-1487, 2015) that extends the NF to nonholonomic mobile platforms and allows stabilizing not only the position of robots but also their orientation. This algorithm is used as a reference in the experimental performance comparison. In the new algorithm, the gradient of the NF is used to generate the motion direction, but the velocity is computed as a function of the position and orientation errors. This approach results in much faster state convergence. Analysis of the convergence shows how the location of the eigenvalues of the linearized system affects the time of goal achievement. The paper describes a saddle point detection and avoidance methodology and presents its experimental verification. It also shows what happens in practice if the initial position is located exactly at the saddle point and the detection/avoidance procedures are turned off.

Introduction
The breakthrough idea of using artificial potential fields to control manipulators and mobile robots was introduced by Khatib [4] in 1986. In this approach both the attraction to the goal and the repulsion from the obstacles are negative gradients of artificial potential functions. The original solution had one major drawback: local minima may occur, depending on the configuration of obstacles in the task space and on an inadequate choice of potential functions. The problem is caused by the fact that the attracting and repulsive components of the control are combined by simple addition; in some states far from the target location, the addition of the attracting and repulsive vectors may result in a zero vector. Because of the nonholonomic constraints of the platform considered here, the NF gradient cannot be used directly to control the robot as in [13,14]. It is used instead as a reference for a nonlinear controller that generates the linear and angular velocities applied to the mobile platform. In contrast to the Rimon and Koditschek algorithm, the orientation also converges to the desired value; a convergence proof was included in the paper. The author of this paper, together with co-authors, conducted extensive tests of this algorithm, including simulations for sphere worlds [10] and star worlds [6], and experiments for sphere worlds [7-9]. The beginning of the 21st century saw the publication of many proposed extensions of the NF approach to groups of mobile robots [2,15,16]. These publications addressed the problem of collision avoidance in multi-agent robotic systems: the first paper introduces the multi-robot navigation function (MRNF), while the second and third use prioritization to solve conflicts in the case of concurrent goals. In [11] a new NF is proposed that can be computed locally, requiring only the knowledge of the target and of the obstacles in the robot's neighborhood. In [17,18] the NF is used to control aircraft. This work is motivated by the poor time performance of the known NF control methods, which is caused by the direct dependency of the velocity control on the magnitude of the NF gradient.
The gradient depends on many factors, including the number of obstacles, their relative positions, the obstacle functions used for collision avoidance, the goal location, and some design parameters that must be tuned to avoid local minima. As a result, the magnitude of the gradient vector is usually not the best choice for the velocity control. The main contribution of this paper is a modification of the method described in [20] that makes the convergence to the desired coordinates much faster. The convergence of the closed-loop system is analyzed by linearization in the neighborhood of the critical point. It is shown that by properly selecting the new design parameters, the real parts of the eigenvalues of the linearized system can easily be moved to larger negative values. This results in faster convergence of the system to the desired values. The effectiveness of the proposed method is verified by experiments. In addition, experimental verification of the NF saddle point detection and avoidance is included; to the best of the author's knowledge, results of this kind have not been published previously. Section 2 introduces the model of the differentially driven mobile platform and of the environment. Section 3 presents the control algorithm. In Section 4 the convergence proof is given. Section 5 describes the method of saddle point detection and avoidance. Section 6 presents the experimental test-bed and the results of experiments that illustrate how the convergence of the new algorithm improved in comparison to the reference algorithm; the detection of the saddle point and the algorithm that drives the robot out of this point are also presented. The final section contains the conclusions.

Model of the System
A kinematic model of a differentially driven mobile robot is given by the following equation:

q̇ = B(q) u,  B(q) = [cos θ 0; sin θ 0; 0 1],   (1)

where the vector q = [x y θ]^T denotes the pose, x, y are the position coordinates and θ is the orientation of the platform expressed in a global, fixed coordinate frame. The vector u = [v ω]^T is the control vector, with v denoting the linear velocity and ω the angular velocity of the robot. The obstacles are circle-shaped and modeled with the obstacle function

β_i(r) = ||r − p_i||² − ρ_i²,   (2)

where ρ_i is the radius of the i-th obstacle (i = 1, ..., N), p_i represents its center, r = [x y]^T is the location of the robot and N is the number of obstacles. The obstacle function has a zero value at the boundary of the obstacle and increases as the distance to the obstacle grows; it must be at least twice differentiable. The task space has an external boundary described by an additional obstacle function, which can be considered as representing one more obstacle and is denoted in the further equations with index zero:

β_0(r) = ρ_0² − ||r − p_0||²,   (3)

where ρ_0 is the radius of the task space and p_0 is the center of the task space (usually the origin of the global coordinate frame).
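To make Eq. 1 concrete, a minimal Euler-integration step of these kinematics could look as follows (a sketch; the function name and the example values are illustrative):

```python
import numpy as np

def unicycle_step(q, u, dt):
    """One Euler step of the differential-drive kinematics q_dot = B(q) u,
    with pose q = [x, y, theta] and control u = [v, omega] (Eq. 1)."""
    _, _, theta = q
    B = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    return np.asarray(q, dtype=float) + dt * (B @ np.asarray(u, dtype=float))

# Example: drive forward at 0.2 m/s while turning at 0.5 rad/s for 10 ms.
q_next = unicycle_step([0.0, 0.0, 0.0], [0.2, 0.5], 0.01)
```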
Control Algorithm
The goal is to stabilize the robot at the origin with orientation equal to zero. The total NF is as follows:

V(q) = C(q) / (C(q)^κ + β(r))^{1/κ},   (4)

where κ is a positive, constant design parameter and

C(q) = x² + y² + k_w θ².   (5)

The design parameter k_w in Eq. 5 is a positive constant that allows tuning the importance of the orientation with respect to the Euclidean distance to the target. A common obstacle function is obtained by the aggregation of the individual obstacle functions:

β(r) = Π_{i=0}^{N} β_i(r),   (6)

where the internal obstacles are represented by i = 1, ..., N, while index zero denotes the task space boundary. The new control algorithm is given by Eqs. 7-9, where a is a positive, constant design parameter. The design parameters b̄, e and f in Eqs. 8 and 9 are positive constants, L = [sin θ −cos θ 0]^T and g = ||B^T ∇V|| ≥ 0. The function h(g) is given by Eq. 10, where ε is a small positive constant; note that h(g) is a non-decreasing function and h(0) = 0. Finally, ∇V denotes the gradient of the navigation function with respect to the variables x, y and θ. Regardless of the number of obstacles, the gradient can be obtained in analytical form (Eqs. 11-13). As noted in [14], all undesired local minima of the NF (4) disappear as the parameter κ is increased. An algorithm for automatically tuning κ for sphere worlds is presented in [3]; the tuning parameter must satisfy a lower bound to ensure convergence to the desired value. For a sufficiently high value of the parameter κ, the NF (4) has one critical point, a saddle point, associated with each isolated obstacle; V has no critical points other than these points and the goal. Saddle points are unstable equilibrium points. Urakubo et al. [21] propose a special control procedure for saddle point avoidance, which uses a time-varying trigonometric function to push the robot away from the unstable equilibrium point. In Section 5 this method is recalled together with a detailed description of saddle point detection, and Section 6 shows the results of the experimental verification. The NF V (4) has the following properties:
- V is at least twice continuously differentiable, and all elements of ∇V and of the Hessian matrix ∂(∇V)/∂q are bounded,
- V ≥ 0, and V = 0 ⇔ q = 0,
- the critical points of V are a minimum at q = 0 and saddle points associated with the obstacles; V does not have any other critical points,
- V is a Morse function,
- the Hessian matrix is nonsingular at the critical points.

Convergence of the Closed-Loop System
This section presents the analysis of the system convergence. It includes only the steps that differ from the analysis shown in [20]. Substituting (7) into (1), one obtains Eq. 14. As this equation is not differentiable, it is modified to obtain a differentiable one, Eq. 15. Note that the differentiable model of the closed-loop system (15) is needed for the further analysis only; it is not used in the implementation. For ε → 0, (15) approaches (14). By linearizing (15) in the neighborhood of an equilibrium point q = q_0 ∈ H, where H = {q | B^T ∇V = 0}, one obtains Eq. 16. The matrix A(q_0) is computed taking into account that ||B^T ∇V|| → 0 as t → ∞. This condition can be rewritten as (∂V/∂x cos θ + ∂V/∂y sin θ)² + (∂V/∂θ)² → 0, which leads to the conclusion that ∂V/∂θ → 0 and ∂V/∂x → 0 as the system approaches the equilibrium point. Moreover, assuming that the numerator of the fraction on the right-hand side of Eq. 8 satisfies L^T ∇V → 0 as t → ∞ (which may be rewritten as ∂V/∂x sin θ − ∂V/∂y cos θ → 0), and knowing that ∂V/∂x → 0 and θ → 0, one can state that ∂V/∂y → 0. Summarizing, due to the fact that ∇V → 0, the closed-loop system is asymptotically stable to the equilibrium point (lim_{t→∞}(x, y, θ) = 0). Notice that as the denominator of the fraction on the right-hand side of Eq. 8 tends to zero (h(g) → 0) as t → ∞, the numerator must converge to zero (L^T ∇V → 0) faster than the denominator, to avoid b going to infinity and violating the controllability of the system. For the further analysis the control (7) can be rewritten as in Eqs. 17 and 18, where the vectors Ī and J̄ are given by Eqs. 19 and 20. Note that the vectors Ī and −J̄ are perpendicular because their scalar product is zero: Ī · (−J̄) = 0. It follows that the control is a composition of two perpendicular vectors Ī and J̄, scaled by γ̄_a and γ̄_b respectively. The scalar b, given by Eq. 8, is computed using the Lie bracket of the vector fields b_1 and b_2.
The third vector field, ad_{b_1} b_2, fulfills the controllability condition: the nonlinear control system is controllable. Assuming that only the first part of the control, Ī, is active (J̄ = 0), the Lyapunov function decreases and stops in the set {x, y, θ} ∈ Ω: (∂V/∂x cos θ + ∂V/∂y sin θ)² + (∂V/∂θ)² = 0, which is equivalent to ∂V/∂x = 0 and ∂V/∂θ = 0. The second component of the control (the one that contains b) produces the effect of the Lie bracket L, which is responsible for the convergence in the y coordinate. If both components of the control are active, then V decreases and ∂V/∂q → 0 because the system is controllable. The condition ∇V = 0 leads to the conclusion that q = 0, because the system is in the set g = 0; it means that x = 0, y = 0 and θ = 0. These intuitive arguments are extended to the formal stability analysis presented in [20]. The matrix A(q_0) is given by Eq. 23, in which b takes the form of Eq. 24, and the three eigenvalues of A(q_0) are given by Eq. 25. The eigenvalues λ_{2,3} differ from the ones obtained in [20] only by the coefficient (e/2f). All the conclusions for the linearized system presented in [20] remain the same for the modified algorithm; in particular, the necessary conditions for an equilibrium point to be stable are given there. If the system has a zero eigenvalue, there exists in the neighborhood of 0 a function V(q) such that (∂V/∂q) q̇ is negative for q ≠ 0 and which has the further property that it remains negative definite for some reasonable class of perturbations [1]. The experiments carried out seem to confirm that in the presented analysis the higher-order terms (rejected as a result of the approximation) belong to this class, because the behavior of the closed-loop system, even for states far from the equilibrium point, is consistent with the results of the analysis based on the linear approximation. This local analysis thus seems to be applicable to initial states located quite far from the equilibrium point. The eigenvector corresponding to the zero eigenvalue lies in the tangent space of the set H at the equilibrium point. The velocity q̇ in the direction of the eigenvector of the zero eigenvalue cannot be generated, even in the nonlinear system (14); the motion in that direction is restricted by the nonholonomic constraint. The two non-zero eigenvalues determine the stability of the equilibrium point. It can be clearly seen from the analysis of the eigenvalues λ_{2,3} that by tuning the e and f parameters the term (e/2f) can be increased. This raises both the real and the imaginary components of these two eigenvalues; the real parts are relocated further into the left (negative) half of the complex plane, an operation that allows better convergence of the system. The eigenvalues can be ensured to have only real parts by fulfilling an additional condition involving the parameter a; this condition is fulfilled if a set of simpler inequalities is met. In general, designing the NF to fulfill these conditions may be quite difficult; however, they are not necessary to achieve asymptotic stability. They assure the non-oscillatory behavior of the linearized system, which is a desired property in most applications. If the robot is far from the equilibrium point, the linear approximation may not be accurate, and thus the improvement of the convergence is deduced by analyzing the nonlinear control (7). In the NF V (4), the function of the distance to the goal, C (5), and of the obstacles, β (6), is mapped from (0, ∞) to (0, 1), the codomain of V.
As a result, the influence of the distance to the goal on the control intensity (mainly for locations far from the goal) is reduced. In the proposed modification the control is extended by the term (||q|| + e), which introduces a dependency of the control intensity on the distance to the goal. The term (g² + f) in the denominator intensifies the control for small values of ∇V (and thus of B^T ∇V). These observations were confirmed by simulations and experiments, although of course this does not constitute a formal analysis. The properties of the NF given in Section 3 help extend the region in which the desired attributes of the linear approximation (resulting from the analysis of the eigenvalues) remain in force. For improperly chosen e and f, the proposed algorithm may also exhibit slow convergence, especially for distances far from the goal. Numerical tests show that even in these cases (far from the goal) the influence of (e/2f) was consistent with the conclusions resulting from the analysis of the linearized system: by increasing (e/2f) the convergence could be significantly improved. Of course, it is not guaranteed that this property always holds in the nonlinear system.

Saddle Points
Saddle points are unstable equilibrium points that occur when the attracting vector is balanced by the collision-avoidance vector. There is one such point associated with each obstacle. From the theoretical point of view the saddle point is a set of measure zero, so one might suspect that the designer does not have to deal with this problem; however, it is not obvious how the behavior of the robot is affected by the saddle point in a real application. Some researchers state that saddle points present no problem in real-world robotics, since being trapped by a saddle point can only occur in infinite-precision dynamics [19,22]. On the other hand, measurement quantization may cause the saddle point to expand to some area surrounding its exact, theoretical location. In this section some observations made during the experiments are presented. It should be noted that the behavior of the system may differ in other test-beds; in particular, the resolution of the localization system, its precision, noise and repeatability, and the resolution of the actuator controllers may influence the system behavior.

Saddle Point Detection
The detection of the saddle point is based on the analysis of the gradient ∇V and the Hessian H(V) of the navigation function. The saddle point is detected if two conditions are fulfilled:
1. ∇V = 0 (in practice ||∇V|| < s, where s is a small positive constant),
2. one eigenvalue of H(V) is negative.
In Section 6 a detailed analysis of the influence of the parameter s on the process of saddle point detection and avoidance is presented. The Hessian of the navigation function can be obtained in analytical form from the Hessians of the attraction function and of the obstacle function.

Control in the Saddle Point
The control given by Eq. 7 is temporarily replaced with the following one (Eq. 27) [21], where a_1, a_2, w and b_1 are chosen such that [b_1 a_1 a_2 2w 0]^T is parallel to the eigenvector of H(V) corresponding to the negative eigenvalue, and |a_1 w| ≪ 1, |a_2 w| ≪ 1 and |b_1| ≪ 1. In [21], a detailed analysis of the system with this control is given.
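The two detection conditions above translate directly into code. The following sketch assumes the gradient and Hessian of V are available numerically (names are illustrative):

```python
import numpy as np

def is_saddle(grad_v, hess_v, s=0.01):
    """Saddle-point test from Section 5: ||grad V|| is below the threshold s
    and the Hessian of the NF has at least one negative eigenvalue."""
    if np.linalg.norm(grad_v) >= s:
        return False
    eigenvalues = np.linalg.eigvalsh(np.asarray(hess_v, dtype=float))  # symmetric Hessian
    return bool(eigenvalues.min() < 0.0)
```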
It is shown that by moving along the eigenvector of the negative eigenvalue, and taking into account that the value of the NF in the area to which the robot is driven is lower than at the saddle point, the proper behavior is obtained: the robot does not approach the saddle point and converges to the goal.

Experiments
In this section the experimental test-bed is described and the results of the conducted tests are presented.

Experimental Setup
The mobile platform used in the presented experiments is the differentially driven MTracker robot (Fig. 1). It was designed at Poznan University of Technology. The robot is controlled by a two-level hardware controller: the low-level motion controller uses a TMS320F28335 150 MHz signal processor, and the high-level controller is a single-core Intel Atom 1.2 GHz board equipped with a WiFi radio used for remote management, task setting and communication with the external localization system. Depending on the requirements, the high-level controller runs the Linux Ubuntu or Windows XP operating system. The MTracker is a small platform: its diameter is 0.14 m, its height is 0.13 m, its weight is 1.4 kg, and its wheels have a diameter of 0.05 m. The on-board power supply is a Li-Ion 3.7 Ah battery that allows two hours of active operation. During the tests the robot is localized by the OptiTrack motion capture system; four infra-red reflecting markers were mounted on top of the robot. Wheel velocity control signals were scaled down when their values exceeded the limit. This limit was set to 12 rad/s, while the physical limitation of the actuators is 24 rad/s; the lower value of the limit prevented the robot wheels from longitudinal slip. The obstacles were known a priori, to prevent the influence of measurement inaccuracies on the experimental results. A special scaling procedure is applied to the wheel controls: the desired wheel velocities are scaled down when at least one of the wheels exceeds the assumed limitation. The scaled control signal u_s is calculated as in [20], where ω_r and ω_l denote the right and left wheel angular velocities, and ω_max is the predefined maximal allowed angular velocity for each wheel. This scaling procedure preserves the direction of motion of the mobile platform.

New Algorithm Comparison
This section discusses the results of the experiments. In all the presented cases the desired state was the origin with zero orientation. Figures 2b and 3b show the time graphs of the position and orientation coordinates. It can be seen that the new algorithm drives the system error to zero in 15 s, while the algorithm from [20] requires 30 s to reach this state. The controls generated by the new algorithm are larger by an order of magnitude (Figs. 2c and 3c); as a result, the controls of the robot wheels are also much higher (Figs. 2d and 3d). After transformation by the scaling procedure, the control signal goes into saturation at the beginning and stays in this state until 12.5 s (Figs. 2e and 3e). In the reference algorithm the control saturated a few times, but only during short time intervals. To summarize, both algorithms produce a similar path shape, but the new algorithm converges twice as fast. As shown in Kowalczyk [5], for initial locations further from the origin the improvement may be even greater, and in some cases the new algorithm works properly where the reference algorithm may be considered useless. Note that the velocity limits of the actuators also reduce the observed improvement.
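Returning to the wheel-control scaling described in the experimental setup, a direction-preserving implementation consistent with that description might look as follows (a sketch under the stated assumptions; the exact formula of [20] is not reproduced in the text):

```python
def scale_wheel_controls(omega_r, omega_l, omega_max=12.0):
    """Scale both wheel commands by a common factor whenever either exceeds
    omega_max (12 rad/s here), so the ratio between the wheel velocities, and
    hence the platform's direction of motion, is preserved."""
    peak = max(abs(omega_r), abs(omega_l))
    if peak <= omega_max:
        return omega_r, omega_l
    scale = omega_max / peak
    return omega_r * scale, omega_l * scale
```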
Applying the new algorithm is more beneficial for larger κ, which must be increased when narrower passages between obstacles exist in the task space. Figure 4b presents the time graphs of the position and orientation coordinates; they converge to the desired values in 17 s. Figures 4c and d show the time graphs of the controls for the platform and for the wheels after the scaling procedure. Figure 4e presents the activity of the saddle point procedure; as can be observed, it is active during the first 0.5 s of the experiment. The next experiment was conducted for s = 0.05. In this case the detection areas surrounding the saddle points are much larger, marked with dashed lines in Fig. 5a. Figure 5b shows the time graph of the robot's coordinates; the greater s resulted in slower convergence (2 s more). Figure 5c shows the linear and angular controls of the platform, Fig. 5d the robot wheel controls after scaling, and Fig. 5e the saddle-avoidance activation variable. As one can see, the robot entered the avoidance area four times during the first 4 s of the experiment. The last experiment was conducted with the saddle point avoidance procedure turned off. Figure 6a shows the (x, y)-path and Fig. 6b the initial part of the path. It can be noted that the platform oscillates around the saddle point, finally leaves its surroundings and proceeds to the desired location. Figure 6c shows the time graph of the position and orientation coordinates; the largest oscillation amplitude can be observed in the orientation signal (±0.25 rad). After 16 s the robot leaves the trap. Oscillations are also observed in the angular velocity control signal (Fig. 6d, dashed lines). Figure 6e shows the wheel controls after scaling; during the first 16 s the wheel controls switch between the positive and negative velocity limits. Note that whether the robot was driven to the left or to the right side of the obstacle depended on the time variable in Eq. 27. The best convergence time (mostly determined by the time needed to leave the saddle point) was obtained for s = 0.01; for much larger or smaller values of s the convergence was slower.

Conclusions
Navigation function algorithms are used for set-point control of vehicles moving in 2D and 3D environments, with and without nonholonomic constraints, with sphere-shaped obstacles, star-shaped obstacles and more complex environments in which obstacles can be modeled as trees-of-stars. These methods can be tuned to ensure one global minimum of the NF; however, in general they do not guarantee fast convergence to the target. This paper proposes a rapidly converging NF control algorithm, and the convergence of the closed-loop system is analyzed. The advantages of the new algorithm are significant for greater values of the κ parameter, which must be increased if narrow passages between obstacles exist in the task space. The method was tested in experiments on an actual robot. It provides not only a better convergence time, but also rapid convergence in cases in which other methods produce questionable results; the experiments presented illustrate a clear improvement. The paper also includes experimental verification of the robot's behavior at the saddle point, tested with the saddle point procedure turned off and for two different sizes of the saddle point detection regions.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Possibilities of Energy Utilization of Waste from the Automotive Industry at Present
The car is now a consumer product, and its service life has decreased significantly over the last decades. According to the Ministry of the Interior of the Slovak Republic, as of 31 January 2020 there were 3,286,258 registered motor vehicles in Slovakia, of which 2,395,362 were passenger cars; this corresponds to slightly fewer than 0.5 passenger vehicles per Slovak citizen. Used passenger cars make up the largest part of the car waste. This paper deals with how waste from cars after the end of their life cycle can be energetically utilized in currently operating energy facilities.

Introduction
The sector of energy evaluation of waste has undergone rapid technological development over the last 10 to 15 years. Many of the changes in this sector have been triggered by sector-specific legislation, which has led in particular to a reduction in air emissions from individual plants. The development process is continuous, with techniques being developed in this sector that limit costs while maintaining or improving care for the environment. The aim of waste incineration, together with most waste management methods, is to treat waste in such a way as to reduce its volume and hazard, while trapping (and thus concentrating) or decomposing potentially hazardous substances. Incineration processes can also be a means of recovering energy, minerals or chemicals contained in waste. The product of pyrolysis is a more valuable fuel.

Energy-usable waste from the automotive industry
Plastics: usable parts are sold as spare parts directly to customers. According to the agreement of the Association of European Manufacturers, each plastic part in a car weighing more than 100 g is marked with a code that makes it possible to identify the type of plastic and thus to facilitate recycling or to decide on its suitability for energy use. Residues from cable scrap can also be included in the category of plastic waste; these consist of cable insulation, terminals, connectors, etc.
Rubber: used tire casings are processed by crushing. The rubber crumb is used as a substitute for conventional fuels, for example in cement plants. Old tires contain 18-20 % steel cord and fabric.
Seat padding: polyurethane foam is torn and, after re-pressing, used as a thermal or cushioning material, as a replacement for a green-roof substrate, or as part of a "solid alternative fuel".
Waste oil and leather: these are suitable for energy use, i.e. incineration in a municipal waste incinerator or in another facility with a combustion plant (e.g. a cement kiln).
Synthetic fabrics and covers: these make up approximately 2.5 % of the weight of the vehicle; after tearing they are suitable as a semi-finished product for collection for further processing, or for energy use.
Other unusable waste: depending on its composition, it can still be used for energy with suitable technology (e.g. pyrolysis), and only in the last step are the residues sent for disposal to a municipal waste landfill.
Reasons for energy recovery of waste from the automotive industry Energy recovery of waste from the automotive industry brings great benefits: • obtaining so-called "alternative energy", which saves primary energy sources; • the possibility of excluding waste from the automotive industry from landfills; • the possibility of disposing of residual waste from the automotive industry (waste remaining after the separation of secondary raw materials); • minimization of the volume of waste after final disposal (only up to 10 % of the waste remains as ash, depending on the composition of the original waste before energy recovery). Disadvantages of energy recovery of waste from the automotive industry: • the waste must be suitably treated before energy recovery, according to the energy recovery technology used; • it is necessary to continuously measure the composition of the waste and, where necessary, to adjust the operating conditions of the energy recovery process; • removing harmful emission constituents requires investment-intensive additional equipment. The principal technologies for the energy recovery of waste from the automotive industry are incineration, gasification, pyrolysis and plasma gasification, preferably all in conjunction with cogeneration. Cogeneration of heat and electricity at least partially covers the high energy intensity of car-wreck recycling (the input of the technological lines, especially shredding, is several hundred kilowatts). Possible ways of energy recovery of waste from the automotive industry Incineration is recommended by EU directives as one of the methods of disposing of waste from the automotive industry that, under specified conditions, meets strict environmental protection requirements. The incineration of waste can be considered advantageous from the point of view of its energy use and is justified especially for waste that burns without additional fuel. It is not always possible to ensure this condition (either because of a small amount of waste or because of its chemical-physical properties), so it is often appropriate to create fuel-waste mixtures, most often together with conventional fuels. In the case of solid waste (plastics, rubber, textiles, paper, etc.), these are mixtures with coal, biomass, etc., thus creating a solid alternative fuel. Solid alternative fuel is a material that results from the separation and treatment of waste materials composed of plastics, paper, textiles, rubber and other combustible substances. It is a crushed mixture of substances from selected industrial and sorted municipal waste with a clearly defined composition and a specified particle size distribution, which yields a fuel mixture with regularly controlled parameters and a minimum content of hazardous waste and waste contaminated with hazardous substances. The production of solid alternative fuel uses the physical and chemical properties of input raw materials and types of waste according to the Waste Catalog that meet the requirements of the plant in which they will be incinerated, i.e., whose operation will be in accordance with the valid legal regulations in this area of environmental protection. Several types of waste are usually used for the production of solid alternative fuel, e.g. mixed plastics, paper, cardboard, textiles, textile fiber, carpets, rubber, tires, wood, and chipboard.
The components of waste with a high calorific value are especially suitable for energy use, for example: dry paper 17 MJ/kg, rubber 35 MJ/kg, PET bottles 23 MJ/kg, plastic from recycling 25 MJ/kg, plastic foil 42 MJ/kg, hard plastic 34 MJ/kg, mixed textile 20 MJ/kg, dry wood 17 MJ/kg, leather and shoes 19 MJ/kg. The advantage of combusting a solid alternative fuel lies mainly in the properties of the resulting material: its calorific value is comparable to high-quality brown coal, and with suitable composition and quality of the input materials a calorific value comparable to black coal can be achieved, while at the same time the amount of harmful substances is reduced. All methods of incineration of waste from the automotive industry must under all circumstances meet emission limits for gaseous emissions (CO, SO2, NOx, and also, for example, dioxins, furans and other pollutants) and particulate matter. Municipal and industrial waste incinerators (e.g. for waste from the automotive industry) are equipped with continually modernized incineration technologies: • combustion on different types of grates; • incineration in rotary kilns; • fluidized bed combustion; and flue gas cleaning technologies: • dry flue gas cleaning; • semi-dry flue gas cleaning; • wet flue gas cleaning; • SNCR technology; • DeNOx and DeDiox systems, in a locally and technically suitable combination. After combustion, gasification has the longest history of development; in the past it was best known for the carbonization of biomass to obtain charcoal. Gasification is the conversion of a solid by gasification reactions into synthesis gas, which is used energetically either directly by combustion or, after purification and transport through a gas pipeline, at the destination. Disposal of some types of waste by gasification has several advantages over traditional incineration methods, such as partially meeting society's energy needs by gasifying less valuable fossil fuels and waste, and increasing the energy self-sufficiency of countries without quality fossil fuel resources. Waste gasification is a complex process involving many physical and chemical reactions. Liquid and solid wastes, which contain a significant proportion of bound carbon and less hydrogen, react with a substoichiometric amount of oxidizing agent. Waste gasification takes place in a reducing environment, similar to pyrolysis, but at higher temperatures and under the action of gasifying agents, e.g. an oxygen-steam mixture. One of the fundamental differences between gasification and pyrolysis is that fixed carbon also passes into the gas phase during gasification.
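Returning to the calorific values listed at the start of this section, a minimal Python sketch estimates the overall calorific value of a solid-alternative-fuel mixture as a mass-weighted average. The component values come from the text; the mixture proportions are illustrative assumptions, not values from the paper.

```python
# Calorific values quoted in the text, in MJ/kg.
CALORIFIC_MJ_PER_KG = {
    "dry paper": 17, "rubber": 35, "PET bottles": 23,
    "plastic from recycling": 25, "plastic foil": 42,
    "hard plastic": 34, "mixed textile": 20, "dry wood": 17,
    "leather, shoes": 19,
}

def mixture_calorific_value(mass_fractions):
    """Mass-weighted average calorific value (MJ/kg) of a fuel mixture."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(CALORIFIC_MJ_PER_KG[c] * f for c, f in mass_fractions.items())

# Illustrative (assumed) mixture of sorted automotive and municipal fractions:
mix = {"plastic from recycling": 0.4, "dry paper": 0.3,
       "mixed textile": 0.2, "rubber": 0.1}
print(f"{mixture_calorific_value(mix):.1f} MJ/kg")  # 22.6 MJ/kg
```

A result in this range is consistent with the text's remark that a well-composed solid alternative fuel is comparable to high-quality brown coal.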
The possibilities of using such technology for the energy recovery of waste are promising, although the heterogeneous composition of the waste batch is particularly problematic. In simple terms, the whole gasification technology can be described as follows: the waste is placed in a gasifier, where a mixture of gases is formed, with ash, slag and metals as the second product; after gas purification, the purified gas is used for energy (in some configurations it is not absolutely necessary to purify the resulting gas mixture). Purification of the gas prevents the escape of solids and condensed tar. The parameters that most influence the course of waste gasification are: • Reactor temperature: an important operating parameter; the temperature profile in the reactor is a function of several parameters, such as the equivalence ratio (the ratio of the oxygen volume involved in the oxidation process of thermal waste treatment to the oxygen volume required for complete stoichiometric oxidation; a small computational sketch of this ratio follows at the end of this passage), residence time, chemical energy of the waste, composition and inlet temperature of the dosed waste, thermal insulation of the reactor, etc. • Residence time: the residence time of the processed type of waste in the reaction chamber of the gasification plant is influenced by the type and design of the reactor, the qualitative properties of the input raw material, the temperature in the reaction chamber, the waste treatment method, etc. • Composition and physical parameters of the waste: the transformation of waste into usable energy is influenced by the specific properties of the waste, mainly moisture content, ash content and volatile matter content, bulk density, grain size, elemental composition, calorific value, and contaminant content. • The inlet temperature of the waste material and the outlet temperature of the gasification products: these have a significant effect on the mass and energy balance of the reactor. • Operational and performance parameters: it is important to harmonize and determine the right combination of operating conditions to create a suitable treatment system for different types of waste. Pyrolysis is one of the most advantageous methods for the energy recovery of waste plastics. Pyrolysis is the heat treatment of waste materials in a pyrolysis furnace or reactor at a temperature of 250 to 1650 °C without access to air, or with limited access to air and at reduced atmospheric pressure. Pyrolytic decomposition yields liquids (pyrolysis oil) and gaseous substances (pyrolysis gas). The material input is waste plastics that for any reason cannot be further recycled. The resulting product is a fuel whose final quality is determined by the quality of the feed to the pyrolysis reactor. Technologies for processing plastic waste into fuel oils have the potential to address two major problems of today: the shortage of fossil fuels and the production of non-recyclable plastic waste. The pyrolysis processing of plastics consists of liquefaction, pyrolysis and catalytic splitting, in which waste plastics are converted into liquid hydrocarbons suitable as fuel (the plastics are converted back into their "original" material). In this way, it is possible to process almost all plastics that would otherwise end up in landfills without recovery. The gases formed during pyrolysis condense in a specially designed condensing system to form aliphatic, cycloaliphatic and aromatic hydrocarbons. The resulting mixture essentially corresponds to petroleum distillate.
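As referenced in the parameter list above, the equivalence ratio reduces to a single division. A minimal sketch, assuming the two oxygen volumes are known from the plant's operating data (the function and variable names are illustrative, not from the paper):

```python
def equivalence_ratio(o2_supplied_m3, o2_stoichiometric_m3):
    """Ratio of the oxygen volume supplied to the thermal treatment process
    to the oxygen volume required for complete stoichiometric oxidation.
    Values well below 1 indicate the sub-stoichiometric (reducing) regime
    characteristic of gasification."""
    return o2_supplied_m3 / o2_stoichiometric_m3

print(equivalence_ratio(30.0, 100.0))  # 0.3 (illustrative numbers)
```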
The density and other properties of this fuel are similar to those of diesel; the resulting fuel has essentially the same energy potential but, from the point of view of ecology, significantly lower emissions. The obtained fuel oil can be used as a fuel for internal combustion engines, generators, boilers and industrial burners, or as a secondary raw material for the production of benzene, toluene, etc. Approximately 0.9 litres of fuel can be obtained from 1 kg of plastics if polyolefins such as polyethylene (PE) and polypropylene (PP), or polystyrene (PS), are processed. Pyrolysis is a promising technology; one advantage of this energy recovery technology is that it can process a wide range of waste, even polluted waste, because most heavy metals, for example, pass into the solid pyrolysis residues and do not enter the pyrolysis oil or gas, and thus do not enter the gaseous emissions from combustion. The solid residue from the pyrolysis reactor represents approximately 1/3 of the original weight of dry waste when municipal waste is used and 1/10 of the original weight of dry waste when plastic waste is used. This solid residue after carbonization falls into a cooling trough, where it is indirectly cooled by process water. It is then transported for sorting on sieves, where the metal residue (ferrous and non-ferrous metals) and mineral fractions (glass, etc.) are separated. The sub-sieve fraction containing carbon is ground to dust and transported to the combustion chamber. From the bottom of the combustion chamber, where the temperature is 1300 °C, liquid slag flows into an aqueous granulation bath. Pyrolysis is carried out in pyrolysis chambers or in fluidized bed and rotary kilns. The furnaces can be heated from the outside through the furnace shell or from the inside with hot inert gas (e.g. nitrogen); to accelerate the pyrolysis, flue gases from the boiler or from the gasifier are fed into the pyrolyzer. Realized pyrolysis units are basically two-stage waste incinerators, in which the first stage is pyrolysis and the second is oxidation. Possibilities of energy recovery of waste from the automotive industry in the Slovak Republic at present At present, the following facilities in the Slovak Republic can be used for the energy recovery of waste from the automotive industry: heat sources for central heat supply intended for combined heat and power generation, municipal waste incinerators, the cement industry, and plants that process plastics by pyrolysis. Slovakia has two municipal waste incinerators, one in Košice and the other in Bratislava, where all types of waste can be combusted; a cement industry in which, for example, tires are well processed; and heat sources for central heat supply in almost every city, where it is justified to burn solid alternative fuel. Pyrolysis is used by only a few small plants, with only one larger one, near the town of Lučenec. Conclusion There are many methods for the energy recovery of waste, but each method has a different degree of reliability and a different degree of suitability for each type of waste. Therefore, our task is to constantly improve these methods and to determine their suitability and advantages for individual types of waste.
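A quick back-of-the-envelope check of the yield figure above: taking the roughly 0.9 L of fuel per kg of polyolefin or polystyrene plastics quoted in the text, and assuming a diesel-like density and calorific value (assumed values, not from the paper), a small sketch estimates the recoverable fuel energy per tonne of plastic waste.

```python
def pyrolysis_fuel_energy(plastic_kg,
                          yield_l_per_kg=0.9,          # from the text (PE, PP, PS)
                          fuel_density_kg_per_l=0.84,  # assumed, diesel-like
                          fuel_mj_per_kg=42.0):        # assumed, diesel-like
    """Rough estimate of the fuel volume (L) and energy content (MJ)
    recoverable by pyrolysis of a given mass of plastic waste."""
    fuel_l = plastic_kg * yield_l_per_kg
    fuel_mj = fuel_l * fuel_density_kg_per_l * fuel_mj_per_kg
    return fuel_l, fuel_mj

litres, energy = pyrolysis_fuel_energy(1000)  # one tonne of plastics
print(f"{litres:.0f} L of fuel, about {energy / 1000:.1f} GJ")  # 900 L, ~31.8 GJ
```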
3,665.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Enumerating partial Latin rectangles This paper deals with distinct computational methods to enumerate the set $\mathrm{PLR}(r,s,n;m)$ of $r \times s$ partial Latin rectangles on $n$ symbols with $m$ non-empty cells. For fixed $r$, $s$, and $n$, we prove that the size of this set is a symmetric polynomial of degree $3m$, and we determine the leading terms (the monomials of degree $3m$ through $3m-9$) using inclusion-exclusion. For $m \leq 13$, exact formulas for these symmetric polynomials are determined using a chromatic polynomial method. Adapting Sade's method for enumerating Latin squares, we compute the exact size of $\mathrm{PLR}(r,s,n;m)$, for all $r \leq s \leq n \leq 7$, and all $r \leq s \leq 6$ when $n=8$. Using an algebraic geometry method together with Burnside's Lemma, we enumerate isomorphism, isotopism, and main classes when $r \leq s \leq n \leq 6$. Numerical results have been cross-checked where possible. Introduction Let $[n] := \{1, 2, \ldots, n\}$. An $r \times s$ partial Latin rectangle $L = (l_{ij})$ on the symbol set $[n] \cup \{\cdot\}$ is an $r \times s$ matrix such that each row and each column has at most one copy of any symbol in $[n]$. Here, $r$, $s$, and $n$ are arbitrary positive integers, and we admit the possibility that $n < \min\{r, s\}$. If $r = s = n$, then this constitutes a partial Latin square of order $n$. The cells containing the symbol $\cdot$ are considered empty, and we say that $l_{ij}$ is undefined. An entry of $L$ is any triple $(i, j, l_{ij}) \in [r] \times [s] \times [n]$. The set of all entries of $L$ is called its entry set, which is denoted $E(L)$. The weight of $L$ is its number of non-empty cells, that is, the size of its entry set. Let $\mathrm{PLR}(r, s, n; m)$ denote the set of $r \times s$ partial Latin rectangles on the symbol set $[n] \cup \{\cdot\}$ of weight $m$, and let $\mathrm{PLR}(r, s, n) = \bigcup_{0 \leq m \leq rs} \mathrm{PLR}(r, s, n; m)$. Let $\mathrm{PLS}(n; m) = \mathrm{PLR}(n, n, n; m)$ be the set of partial Latin squares of weight $m$. For $m = n^2$, this is the set of Latin squares of order $n$. For each positive integer $t$, let $S_t$ denote the symmetric group on the set $[t]$. The remainder of the paper is organized as follows. The three following sections deal with distinct combinatorial methods that enable us to determine the size of the set $\mathrm{PLR}(r, s, n; m)$: (a) Section 2: an inclusion-exclusion method that demonstrates that $\#\mathrm{PLR}(r, s, n; m)$ for fixed $m$ is a symmetric polynomial of degree $3m$; (b) Section 3: a chromatic polynomial method that gives exact formulas for this symmetric polynomial, which we compute for $m \leq 13$; and (c) Section 4: an adaptation of Sade's method (which efficiently enumerates Latin squares) to partial Latin rectangles, which enables us to determine explicitly the number $\#\mathrm{PLR}(r, s, n; m)$ for all $r \leq s \leq n \leq 7$, and all $r \leq s \leq 6$ when $n = 8$. Section 5 describes an algebraic method for computing $\#\mathrm{PLR}((\Theta, \pi); m)$ and also the number of isotopisms between two given partial Latin rectangles. In Section 6 we use the Orbit-Stabilizer Theorem and Burnside's Lemma to compute the number of isomorphism, isotopism, and main classes. Section 7 describes the computational results and the implementations of the various methods. In Section 8 we comment on how these computational results have been cross-checked in order to ensure their accuracy. A glossary of the most common symbols used throughout the paper is given in Appendix A. To improve the readability of the paper, tables are in Appendix B.
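To make the definitions concrete, here is a minimal brute-force counter for $\#\mathrm{PLR}(r,s,n;m)$, directly implementing the definition above (choose $m$ cells, fill them with symbols, and keep fillings with no repeated symbol in any row or column). This is a naive sketch for sanity-checking tiny cases only, not any of the paper's methods.

```python
from itertools import combinations, product

def count_plr(r, s, n, m):
    """Brute-force #PLR(r,s,n;m): feasible only for very small parameters."""
    total = 0
    cells = [(i, j) for i in range(r) for j in range(s)]
    for chosen in combinations(cells, m):
        for syms in product(range(n), repeat=m):
            # A filling is valid iff no (row, symbol) or (column, symbol)
            # pair repeats among the m entries.
            row_pairs = {(i, k) for (i, _), k in zip(chosen, syms)}
            col_pairs = {(j, k) for (_, j), k in zip(chosen, syms)}
            if len(row_pairs) == m and len(col_pairs) == m:
                total += 1
    return total

# Sanity checks: weight 0 gives only the empty rectangle; weight 1 gives
# one entry per cell-symbol combination, i.e. #PLR(r,s,n;1) = r*s*n.
assert count_plr(2, 2, 2, 0) == 1
assert count_plr(2, 2, 2, 1) == 8
print(count_plr(2, 3, 3, 4))  # a small case comparable with published tables
```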
Inclusion-exclusion method For fixed $m \geq 1$, we find formulas for the size of $\mathrm{PLR}(r, s, n; m)$ by modifying the method for enumerating partial orthomorphisms of finite cyclic groups given in [63]. At first, this is a surprising claim, as partial Latin rectangles and partial orthomorphisms are largely unrelated (unless we impose some symmetry, which we don't in the context of this section). The similarity between these two types of objects is that both partial Latin rectangles of weight $m$ and partial orthomorphisms with domain size $m$ are equivalent to non-clashing $m$-sets of ordered triples (the difference is what constitutes a "clash"). Generalized ordered partial Latin rectangles Let $S_m = S(r, s, n; m)$ be the set of sequences $e = (e_i)_{i=1}^{m}$, where each $e_i = (e_i[1], e_i[2], e_i[3])$ is a 3-tuple in $[r] \times [s] \times [n]$. From any $e \in S_m$, we construct an $r \times s$ matrix $M = M(e)$ as follows: • We begin with each cell in $M$ containing the empty multiset $\emptyset$. • For $i \in [m]$, we add symbol $e_i[3]$ to the multiset in cell $(e_i[1], e_i[2])$. If it turns out that every non-empty multiset in $M$ has cardinality 1 and there are no repeated elements in any row or column of $M$, then $M$ is essentially a partial Latin rectangle (formally, we need to map $\emptyset \mapsto \cdot$ and $\{i\} \mapsto i$). For example, if $r = s = n = m = 3$ and $e = ((1,1,1), (1,2,3), (2,2,2))$, then $M(e)$ is the $3 \times 3$ matrix with $\{1\}$ in cell $(1,1)$, $\{3\}$ in cell $(1,2)$, $\{2\}$ in cell $(2,2)$, and $\emptyset$ in every other cell, which corresponds to a partial Latin rectangle. Thus, sequences in $S_m$ are generalized partial Latin rectangles consisting of $m$ ordered entries. Let $A_m$ be the subset of $S_m$ that gives rise to partial Latin rectangles. Hence $\#A_m = m!\, \#\mathrm{PLR}(r, s, n; m)$, because we can order the entries in a partial Latin rectangle in $m!$ ways. For fixed $m$, we define $f_m(r, s, n) := \#A_m$. To find a formula for $f_m(r, s, n)$ by using inclusion-exclusion on the number of "clashes", let $C_m := \{[i, j, k] : 1 \leq i < j \leq m,\ k \in [3]\}$, which we use to index the possible clashes in $e \in S_m$ as follows: $e$ has the clash $[i, j, k]$ whenever $e_i$ and $e_j$ agree in the two coordinates other than $k$. Any $e \in S_m$ therefore has a corresponding set of clashes $C_e \subseteq C_m$. For any $U \subseteq C_m$, define $B_U := \{e \in S_m : U \subseteq C_e\}$, i.e., the sequences in $S_m$ that have the clashes in $U$ (and possibly more clashes), and $D_U := \{e \in S_m : C_e = U\}$, i.e., the sequences in $S_m$ that have precisely those clashes in $U$ (and no more clashes). By definition, $\#B_U = \sum_{V \supseteq U} \#D_V$. Hence, by inclusion-exclusion, $\#D_U = \sum_{V \supseteq U} (-1)^{\#V - \#U}\, \#B_V$. When $U = \emptyset$, we have $\#D_U = \#A_m$ and consequently the following lemma. Lemma 2.1. $f_m(r, s, n) = \#A_m = \sum_{V \subseteq C_m} (-1)^{\#V}\, \#B_V$. Graph colorings Our next goal is to find an equation for $\#B_V$ in terms of the number of vertex colorings of an edge-colored graph, satisfying some additional constraints (neither vertex colorings nor edge colorings are required to be proper in the ordinary sense). Given $V \subseteq C_m$, we define a graph $G = G(V)$ with an edge coloring $\delta = \delta(V)$ by the following process. We start with the null graph on the vertex set $[m]$, and for each $[i, j, k] \in V$: I: If $k = 1$, then add a red edge between $i$ and $j$. II: If $k = 2$, then add a blue edge between $i$ and $j$. III: If $k = 3$, then add a green edge between $i$ and $j$. IV: Replace any parallel edges resulting from I-III with a single black edge. We denote the graph together with its edge coloring generated from $V$ by $(G, \delta)_V$. An example of an edge-colored graph generated in this way is given in Figure 1. Sequences $e \in B_V$ are equivalent to a special type of vertex coloring $\varphi$ of $(G, \delta)_V$, for which we assign to vertex $i \in [m]$ the color $\varphi(i) = (\varphi_1(i), \varphi_2(i), \varphi_3(i)) := (e_i[1], e_i[2], e_i[3])$. This coloring satisfies the properties: • If there is a red edge between vertices $i$ and $j$, then $\varphi_2(i) = \varphi_2(j)$ and $\varphi_3(i) = \varphi_3(j)$. • If there is a blue edge between vertices $i$ and $j$, then $\varphi_1(i) = \varphi_1(j)$ and $\varphi_3(i) = \varphi_3(j)$. • If there is a green edge between vertices $i$ and $j$, then $\varphi_1(i) = \varphi_1(j)$ and $\varphi_2(i) = \varphi_2(j)$. • If there is a black edge between vertices $i$ and $j$, then $\varphi_1(i) = \varphi_1(j)$, $\varphi_2(i) = \varphi_2(j)$ and $\varphi_3(i) = \varphi_3(j)$. We call such a vertex coloring of $(G, \delta)_V$ suitable.
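To make this machinery concrete, here is a minimal sketch, assuming the clash-to-edge encoding just described, that counts suitable colorings by the component-product formula established in Lemma 2.3 below: delete the red (resp. blue, green) edges, count connected components, and multiply the corresponding numbers of color choices.

```python
def components(num_vertices, edges):
    """Number of connected components of a graph on [0, num_vertices),
    computed with a small union-find."""
    parent = list(range(num_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(x) for x in range(num_vertices)})

def suitable_colorings(m, colored_edges, r, s, n):
    """#B_V = r^c(H1) * s^c(H2) * n^c(H3), where H_t keeps exactly the
    edges that constrain coordinate t (black edges constrain all three)."""
    def h(coord):
        # red 'r' fixes coords 2 and 3; blue 'b' fixes 1 and 3;
        # green 'g' fixes 1 and 2; black 'k' fixes all three.
        keep = {1: "bgk", 2: "rgk", 3: "rbk"}[coord]
        return [(u, v) for u, v, c in colored_edges if c in keep]
    return (r ** components(m, h(1))
            * s ** components(m, h(2))
            * n ** components(m, h(3)))

# One red edge between vertices 0 and 1 (they must share column and symbol):
print(suitable_colorings(2, [(0, 1, "r")], 3, 3, 3))  # 3^2 * 3^1 * 3^1 = 81
```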
Conversely, any suitable vertex coloring of $(G, \delta)_V$ with the vertex color set $[r] \times [s] \times [n]$ that satisfies the above four properties is a member of $B_V$, thus giving the following lemma. Lemma 2.2. For all $V \subseteq C_m$, the set $B_V$ is the set of suitable vertex colorings of $(G, \delta)_V$, hence $\#B_V$ is the number of suitable vertex colorings of $(G, \delta)_V$. We can find a simple formula (Lemma 2.3) for the number of suitable colorings of $(G, \delta)_V$ since each of the three coordinates can be accounted for separately. Let $H_1$, $H_2$, and $H_3$ respectively be the graphs formed by deleting the red, blue, and green edges from $(G, \delta)_V$, then ignoring the edge colors. For any graph $H$, let $c(H)$ denote the number of connected components in $H$. Lemma 2.3. The number of suitable colorings of $(G, \delta)_V$ is $r^{c(H_1)}\, s^{c(H_2)}\, n^{c(H_3)}$. Proof. In order to be a suitable vertex coloring, the vertices in each component of $H_1$ must be assigned colors that agree at the first coordinate. We can thus assign the first coordinates of a suitable vertex coloring in $r^{c(H_1)}$ ways. Similar claims hold for $H_2$ and $H_3$. We are now ready to make the following fundamental observation about the polynomials $f_m$. Theorem 2.4. For fixed $m \geq 1$, $f_m(r, s, n)$ is a symmetric polynomial in $r$, $s$, and $n$ of degree $3m$ with integer coefficients. Proof. Lemmas 2.1 and 2.3 ensure that $f_m(r, s, n)$ is a polynomial in the variables $r$, $s$, $n$ and has integer coefficients. The leading term is $(rsn)^m$, which arises when $V = \emptyset$; for all other $V \subseteq C_m$, we see $\#B_V$ has degree less than $3m$. Finally, to verify that $f_m(r, s, n)$ is a symmetric polynomial, we observe that we can permute the colors red, blue, and green (or equivalently, permute the third coordinate of the elements in $C_m$). Each equivalence class under this action contributes a symmetric polynomial to the sum in Lemma 2.1. We conclude that $f_m(r, s, n)$ is a sum of symmetric polynomials, and is therefore also symmetric. A simplified equation For a 4-edge-colored graph $(G, \delta)$, with possible edge colors red, blue, green, and black, let $|G|$ be the number of vertices in $G$, let $|E(G)|$ be the number of edges in $G$, and let $b(\delta)$ be the number of black edges in $\delta$. There are $4^{b(\delta)}$ sets $V \subseteq C_m$ for which $(G, \delta) = (G, \delta)_V$, since a black edge can be formed in 4 possible ways: (a) when exactly two of properties I, II and III hold (3 ways), or (b) when all three of properties I, II and III hold (1 way). From Lemmas 2.1 and 2.3, we have $f_m(r, s, n) = \sum_{V \subseteq C_m} (-1)^{\#V}\, r^{c(H_1)} s^{c(H_2)} n^{c(H_3)}$. From here, we use the following identity from [63]: for any $(G, \delta)$, we have $\sum_{V : (G,\delta)_V = (G,\delta)} (-1)^{\#V} = (-1)^{|E(G)| - b(\delta)} \sum_{x=0}^{b(\delta)} \binom{b(\delta)}{x} 3^{b(\delta)-x} (-1)^{x} = (-1)^{|E(G)| - b(\delta)}\, 2^{b(\delta)}$, using the Binomial Theorem. The local variable $x$ counts the number of black edges where I, II and III all hold. This yields the following theorem: Theorem 2.5. For all $r, s, n, m \geq 1$, we have $f_m(r, s, n) = \sum_{(G, \delta)} (-1)^{|E(G)| - b(\delta)}\, 2^{b(\delta)}\, r^{c(H_1)} s^{c(H_2)} n^{c(H_3)}$, where the sum ranges over the 4-edge-colored graphs $(G, \delta)$ on the vertex set $[m]$. The advantage of Theorem 2.5 is that it eliminates the need to account for clashes (via the variable $V$). Instead, we are now working solely with graphs. For computational purposes, it is easier to work with isomorphism classes of graphs (rather than labeled graphs). We will also account for isolated vertices mathematically. For $v \geq 0$ and $e \geq 0$, let $\Gamma_{e,v}$ denote the set of unlabeled $e$-edge $v$-vertex graphs without isolated vertices (the set $\Gamma_{0,0}$ contains the empty graph, whereas $\Gamma_{e,1} = \emptyset$). We can split Theorem 2.5 according to $e$, $v$, and $\Gamma_{e,v}$ to give the following theorem (Theorem 2.6), which we obtain by grouping the labeled graphs into isomorphism classes and rearranging the resulting equation. The following corollary follows straightforwardly from Theorem 2.6. Corollary 2.7. For fixed $m \geq 1$, the polynomial $f_m(r, s, n)$ is divisible by $rsn$. Furthermore, we use the next result to reduce the required computation. Lemma 2.8. Let $G_1$ and $G_2$ be two graphs. Then, 1. if the two graphs are disjoint, then $P(G_1 \cup G_2) = rsn\, P(G_1) P(G_2)$; and 2.
if the two graphs meet at a single vertex, then $P(G_1 \cup G_2) = P(G_1) P(G_2)$. Finally, the following lemma is useful for finding which graphs have to be included when computing the leading terms in $f_m(r, s, n)$. Lemma 2.9. For any graph $G$ on $v$ vertices, the degree of $(rsn)^{m-v+1} P(G)$ in Theorem 2.6 is at most $3m - 2v + 2c(G)$. Proof. From Theorem 2.6, it suffices to bound the degree contributed by each 4-edge-coloring $\delta$ of $G$. Let us show, by induction on the number of edges, that for any 4-edge-coloring $\delta$ we have $c(H_1) + c(H_2) + c(H_3) \leq v + 2c(G)$ (2.1), with equality when all the edges are red, say. If $G$ has no edges, then we have equality in (2.1). Next, assume (2.1) holds for all graphs with fewer edges than $G$. Chromatic polynomial method Let $R_{r,s}$ be the $r \times s$ rook's graph, i.e., the Cartesian product of $K_r$ and $K_s$. The graph $R_{3,4}$ is drawn in Figure 2. Any partial Latin rectangle in $\mathrm{PLR}(r, s, n; m)$ can be interpreted as a proper $n$-coloring of an $m$-vertex induced subgraph of $R_{r,s}$. An example of this correspondence is also given in Figure 2. We can naturally think of (labeled) induced subgraphs of $R_{r,s}$ as $(0,1)$-matrices, with a 1 in cell $(i, j)$ whenever vertex $(i, j)$ is present. Under this equivalence, we talk of the rows and columns of an induced subgraph. We act on the set of $r \times s$ $(0,1)$-matrices by permuting rows and columns. Under this action, we choose representatives from each orbit and call them canonical. After that, we define $\mathcal{K}_{r,s,m,k}$ to be the set of multisets $K = \{C(M_i)\}_{i=1}^{k}$ of canonical blocks such that (a) the numbers of 1's in the blocks sum to $m$; (b) the numbers of rows in the blocks sum to at most $r$; and (c) the numbers of columns in the blocks sum to at most $s$. Given any $K \in \mathcal{K}_{r,s,m,k}$, we can arrange the blocks in block-diagonal form, padding with all-0 rows and columns (where $\emptyset$ denotes an all-0 submatrix) so that there are $r$ rows and $s$ columns. Call this matrix $M(K)$. If we permute the rows and columns of this matrix, we generate every $r \times s$ $(0,1)$-matrix $M$ that has $\{C(M_i)\}_{i=1}^{k} = K$ some number of times, $\Gamma$ say, by the Orbit-Stabilizer Theorem. It follows from (3.1) that the contribution of each such $M$ factors over the blocks, since each $K_i$ corresponds to a disjoint component in the induced subgraph of $R_{r,s}$. Let $e_{\mathrm{row}}$ and $e_{\mathrm{col}}$ denote the number of non-empty rows and columns in the matrix $M(K)$, respectively. If there are $\ell$ distinct matrices in the multiset $K$, let $k_i$, for $i \in [\ell]$, be the number of copies of the $i$-th distinct matrix. The elements in the stabilizer of $M(K)$ are those which permute the all-0 rows and columns, permute the identical blocks amongst themselves, and stabilize each $K_i$ individually; since every $K_i \in K$ is canonical, the size of the stabilizer follows. To compute $\#\mathrm{PLR}(r, s, n; m)$ for small $m$, we thus: • Generate a list of possible blocks $K$ with up to $m$ ones, inequivalent under row and column permutations, and compute, for each block, the size of $\mathrm{Aut}(G_K)$ and the chromatic polynomial of $K$. Table 1 lists the results of this computation for $m \leq 5$. • Iterate through each $K \in \bigcup_{k \geq 0} \mathcal{K}_{r,s,m,k}$, computing its contribution to (3.2) from the table generated in the first step. To further reduce the computation, we only store blocks with no more rows than columns. This requires the modification of (3.3) to account for transposing the blocks. By ordering the set of all blocks, we can use a multiset of 0-1 flags to keep track of which $K_i$ we transpose, with 1 meaning "transpose" and 0 meaning "don't transpose". We choose not to transpose square matrices at this stage, as it adds the task of identifying when the transpose of a matrix can be formed by permuting its rows and columns; hence we have condition (a). Condition (b) prevents overcounting in cases such as the $3 \times 3$ block with rows $(1,0,0)$, $(1,0,0)$, $(0,1,1)$ and its transpose. Thus, (3.2) can be rephrased to give the following theorem.
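The correspondence between partial Latin rectangles and proper colorings of induced subgraphs of the rook's graph can be checked mechanically. A minimal brute-force sketch (small parameters only, not the paper's optimized computation) that sums proper $n$-colorings over all $m$-vertex induced subgraphs of $R_{r,s}$:

```python
from itertools import combinations, product

def rook_neighbors(r, s):
    """Adjacency of the rook's graph R_{r,s}: cells sharing a row or column."""
    cells = [(i, j) for i in range(r) for j in range(s)]
    return {c: [d for d in cells if d != c and (d[0] == c[0] or d[1] == c[1])]
            for c in cells}

def plr_via_colorings(r, s, n, m):
    """Sum, over m-vertex induced subgraphs of R_{r,s}, of their proper
    n-colorings; this should equal #PLR(r,s,n;m)."""
    adj = rook_neighbors(r, s)
    total = 0
    for verts in combinations(adj, m):
        vset = set(verts)
        for coloring in product(range(n), repeat=m):
            col = dict(zip(verts, coloring))
            # Proper: endpoints of every induced edge get distinct colors.
            if all(col[v] != col[w] for v in verts for w in adj[v]
                   if w in vset and v < w):
                total += 1
    return total

print(plr_via_colorings(2, 3, 3, 4))  # should equal #PLR(2,3,3;4)
```

For small parameters this agrees with the naive counter sketched in the introduction, which provides a quick cross-check in the spirit of the paper's verification section.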
Sade's method Sade's method [55] outstrips all other methods for finding the number of Latin squares [58]. Subsequent authors [9,50,49,70] who found the number of Latin squares of orders n ∈ {8, 9, 10, 11} implemented optimized computerized versions of Sade's method. We generalize Sade's method to partial Latin rectangles: L is isotopic to M , then they can be extended in the same number of ways to (r + 1) × s partial Latin rectangles of a given weight m by adding an (r + 1)-th row. Proof. In the first case, the possible (r + 1)-th rows for L and M are the same. In the second case, if M = L Θ , then the possible (r + 1)-th rows of L Θ are precisely the possible (r + 1)-th rows of L after applying Θ. Let L, M ∈ PLR(r, s, n). We say that L and M are Sade equivalent if L is isotopic to a partial Latin rectangle L ′ such that the columns of L ′ and M have the same sets of symbols. Particularly, L and M must have the same weight. Thus, for instance, the following four partial Latin rectangles in PLR(2, 3, 3; 4) are Sade equivalent. Practically, we need a fast method for checking whether a large number of partial Latin rectangles are Sade equivalent. To this end, for each partial Latin rectangle L ∈ PLR(r, s, n), we perform the following steps, which we illustrate for this example PLR(3, 4, 3; 6): In our running example (4.1), we obtain: 2. Canonically label the graph in a way that preserves the vertex colors (to this end we use nauty [46], for which such a labeling is an internal procedure). In our running example, we obtain: Binary search for sn L ext among the Sade numbers in P LRs[i] 8: if sn L ext is found then Other practical improvements can be made: • In enumerating up to 8 × 8 partial Latin rectangles, the Sade number will be less than 2 64 , which can thus be stored as 64-bit unsigned integers. • When processing the last few rows, we can forgo Sade's method and instead use a simple backtracking algorithm to count the number of extensions up to completions of each partial Latin rectangle. • Although #PLR(r, s, n; m) = #PLR(r, n, s; m), it is significantly faster to compute the value of #PLR(r, s, n; m) when s ≤ n. Algebraic geometry method In this section, we review how sets of partial Latin rectangles are identified with the algebraic sets of certain ideals. This follows the idea of Bayer [10] and Adams and Loustaunau [2] to solve the problem of n-coloring a graph by means of algebraic geometry, since every Latin square of order n is equivalent to an n-colored bipartite graph K n,n [44]. Much more recently, this algebraic method has been adapted to solve sudokus [6,34,56], enumerate quasigroup rings derived from partial Latin squares [20], enumerate partial Latin rectangles that admit a given autotopism [25,26,27,28] or autoparatopism [32], and also the number of isotopisms between two given partial Latin rectangles [30], thereby enabling us to compute in Section 6 the numbers of equivalence classes by means of the Orbit-Stabilizer Theorem and Burnside's Lemma. See [15,43] for more details on algebraic geometry. , for all i ≤ m} . The algebraic set of I is the set of points V(I) := {(a 1 , . . . , a n ) ∈ k n : p(a 1 , . . . , a n ) = 0, for all p ∈ I}. There is a bijection between partial Latin rectangles L = (l ij ) ∈ PLR(r, s, n; m) and elements of the algebraic set of I r,s,n;m : we have l ij = k whenever x ijk = 1, and l ij is undefined otherwise. More specifically: • Having x 2 ijk − x ijk = 0 implies that the algebraic set is contained in {0, 1} rsn . 
• Having $x_{ijk} x_{i'jk} = 0$ implies that the symbol $k$ does not appear twice in column $j$. • Having $x_{ijk} x_{ij'k} = 0$ implies that the symbol $k$ does not appear twice in row $i$. • Having $x_{ijk} x_{ijk'} = 0$ implies that there is at most one symbol in the cell $(i, j)$. • Having $\sum_{i \in [r]} \sum_{j \in [s]} \sum_{k \in [n]} x_{ijk} = m$ implies that the weight of the partial Latin rectangle $L$ is $m$. Since the algebraic set $V(I_{r,s,n;m})$ is finite, its number of points can be computed from the ideal by standard computer algebra techniques. This fact has recently been used in [27] to compute $\#\mathrm{PLR}(r, s, n; m)$, for all $r, s, n \leq 6$ and $m \leq rs$. This algebraic geometry enumeration method can be generalized to include cases in which a certain autoparatopism is imposed (Theorem 5.1). A similar approach enables us to determine the set of isotopisms between two partial Latin rectangles as follows. Theorem 5.2. Let $k[x_{11}, \ldots, x_{rr}, y_{11}, \ldots, y_{ss}, z_{11}, \ldots, z_{nn}]$ be a polynomial ring in $r^2 + s^2 + n^2$ variables. The set $I(P, Q)$ of isotopisms between two partial Latin rectangles $P = (p_{ij})$ and $Q = (q_{ij})$ in $\mathrm{PLR}(r, s, n)$ has a natural bijection with the algebraic set of the ideal $I_{P,Q} := \langle x_{ij}^2 - x_{ij} : i, j \in [r] \rangle + \langle y_{ij}^2 - y_{ij} : i, j \in [s] \rangle + \langle z_{ij}^2 - z_{ij} : i, j \in [n] \rangle + \langle 1 - \sum_{j \in [r]} x_{ij} : i \in [r] \rangle + \langle 1 - \sum_{i \in [r]} x_{ij} : j \in [r] \rangle + \langle 1 - \sum_{j \in [s]} y_{ij} : i \in [s] \rangle + \langle 1 - \sum_{i \in [s]} y_{ij} : j \in [s] \rangle + \langle 1 - \sum_{j \in [n]} z_{ij} : i \in [n] \rangle + \langle 1 - \sum_{i \in [n]} z_{ij} : j \in [n] \rangle + \langle x_{ik} y_{jl} (z_{p_{ij} q_{kl}} - 1) : i, k \in [r],\ j, l \in [s], \text{ such that } p_{ij}, q_{kl} \in [n] \rangle + \langle x_{ik} y_{jl} : i, k \in [r],\ j, l \in [s], \text{ exactly one of } p_{ij} \text{ and } q_{kl} \text{ is undefined} \rangle$. Consequently, the number of isotopisms from $P$ to $Q$ is given by $\#I(P, Q) = \#V(I_{P,Q})$. Proof. The first three subideals of $I_{P,Q}$ imply $V(I_{P,Q}) \subseteq \{0, 1\}^{r^2 + s^2 + n^2}$. Every isotopism $\Theta = (\alpha, \beta, \gamma) \in \mathcal{I}_{r,s,n}$ uniquely corresponds to a zero $(x_{11}^\Theta, \ldots, x_{rr}^\Theta, y_{11}^\Theta, \ldots, y_{ss}^\Theta, z_{11}^\Theta, \ldots, z_{nn}^\Theta)$ in $V(I_{P,Q})$, where $x_{ij}^\Theta = 1$ (resp., $y_{ij}^\Theta = 1$ and $z_{ij}^\Theta = 1$) if $\alpha(i) = j$ (resp., $\beta(i) = j$ and $\gamma(i) = j$), and 0 otherwise. Specifically, the fourth and fifth subideals of $I_{P,Q}$ (in the statement of Theorem 5.2) imply $\alpha$ is a permutation of $S_r$ (the fourth one ensures injectivity, while the fifth one ensures surjectivity). Similarly, the next two pairs of subideals imply $\beta$ and $\gamma$ are permutations of $S_s$ and $S_n$, respectively, and hence $\Theta$ is an isotopism of $\mathrm{PLR}(r, s, n)$. The last two subideals imply $\Theta$ is a bijection between the entry sets $E(P)$ and $E(Q)$. Further, since the squarefree subideals $\langle x_{ij}^2 - x_{ij},\ y_{ij}^2 - y_{ij},\ z_{ij}^2 - z_{ij} \rangle \subseteq I_{P,Q}$, Seidenberg's Lemma and Theorem 3.7.19 in [43] imply the theorem statement. 6 Counting equivalence classes of partial Latin rectangles Theorem 5.2 can be used to determine not only the size of the autotopism group $I(P, P)$ of a partial Latin rectangle $P$, but also that of its autoparatopism group $\mathcal{P}(P, P)$, because $\#\mathcal{P}(P, P) = \sum_{\pi \in S_3} \#I(P, P^\pi)$. The following result shows how the computation of both values enables us to determine the size of the isotopism and main classes containing $P$ by means of the Orbit-Stabilizer Theorem. Theorem 6.1. Let $P \in \mathrm{PLR}(r, s, n)$. Then, 1. the number of partial Latin rectangles that are isotopic to $P$, i.e., the size of the isotopism class containing $P$, is $r!\, s!\, n! / \#I(P, P)$; 2. the number of partial Latin rectangles that are paratopic to $P$, i.e., the size of the main class containing $P$, is $\#S_{r,s,n}\, r!\, s!\, n! / \#\mathcal{P}(P, P)$; and 3. the number of isotopism classes in the main class of $P$ is $\#S_{r,s,n}\, \#I(P, P) / \#\mathcal{P}(P, P)$. Proof. The first two claims follow from the Orbit-Stabilizer Theorem. For the third claim, we observe that paratopic partial Latin rectangles have autotopism groups of the same size, because $\Theta$ is an autotopism of $P$ if and only if $\Lambda^{-1} \Theta \Lambda$ is an autotopism of $P^\Lambda$ for any paratopism $\Lambda$.
They thus also have isotopism classes of the same size, which partition the main class, so the first two claims imply the third. From here on, let $\mathrm{Isom}(n; m)$, $\mathrm{Isot}(r, s, n; m)$ and $\mathrm{MC}(r, s, n; m)$ respectively denote the set of isomorphism classes of $\mathrm{PLS}(n; m)$ and the sets of isotopism and main classes of $\mathrm{PLR}(r, s, n; m)$. The following result follows straightforwardly from Burnside's Lemma and the action of the isomorphism, isotopism, and paratopism groups on the set of partial Latin rectangles of a given order: $\#\mathrm{MC}(r, s, n; m) = \frac{1}{r!\, s!\, n!\, \#S_{r,s,n}} \sum_{(\Theta, \pi) \in \mathcal{P}_{r,s,n}} \#\mathrm{PLR}((\Theta, \pi); m)$. The following result is shown using similar reasoning to that used by Mendis and Wanless in the proof of Theorem 2.2 in [52] for paratopisms of Latin squares. Theorem 6.4. Two paratopisms $((\alpha_1, \alpha_2, \alpha_3), \pi_1)$ and $((\beta_1, \beta_2, \beta_3), \pi_2)$ in $\mathcal{P}_{r,s,n}$ are conjugate if and only if there is a length-preserving bijection $\eta$ from the cycles of $\pi_1$ to those of $\pi_2$ such that, if $\eta$ maps a cycle $(a_1, \ldots, a_k)$ to a cycle $(b_1, \ldots, b_k)$, both of them in the symmetric group $S_3$, then $\alpha_{a_1} \cdots \alpha_{a_k} \sim \beta_{b_1} \cdots \beta_{b_k}$. Proof. If $r = s = n$, or if $\pi_1 = \pi_2$, the proof of Theorem 2.2 in [52] suffices to prove the theorem. Otherwise, up to equivalence, we have $\pi_1 = (12)$, $\pi_2 = \mathrm{Id}$, and $r = s \neq n$. Clearly, $\eta$ does not exist. The two paratopisms are not conjugate, since conjugation in $\mathcal{P}_{r,s,n}$ preserves the conjugacy class of the parastrophe permutation. We similarly prove the other equalities; Theorem 6.4 implies the conjugacy claims. Conjugacy in symmetric groups constitutes an equivalence relation in which each conjugacy class is characterized by the common cycle structure of its elements. Recall that the cycle structure of a permutation $\pi \in S_m$ is the expression $z_\pi := m^{d_m^\pi} \cdots 1^{d_1^\pi}$, where $d_i^\pi$ denotes the number of cycles of length $i$ in the unique cycle decomposition of the permutation $\pi$. Thus, for instance, the cycle structure of the permutation $(12)(345)(78)(9)$ is $3^1 2^2 1^1$. From here on, we denote the set of cycle structures of the symmetric group $S_m$ by $\mathrm{CS}_m$. The number of permutations in $S_m$ with cycle structure $m^{d_m} \cdots 1^{d_1} \in \mathrm{CS}_m$ is $m! / \prod_{i=1}^{m} i^{d_i}\, d_i!$ (6.1). Given a cycle structure $z \in \mathrm{CS}_m$, define $d_i^z := d_i^\pi$ for any permutation $\pi \in S_m$ with cycle structure $z$. The next theorem follows straightforwardly from Theorem 6.2, Lemma 6.3 and (6.1). Theorem 6.6. Let $r, s, n \geq 1$ and $m \leq rs$, and let $\Delta(z_1, z_2, z_3)$ denote $\#\mathrm{PLR}((\Theta, \mathrm{Id}); m)$ for any isotopism $\Theta$ whose components have cycle structures $z_1$, $z_2$, and $z_3$. Then, 1. the number of isomorphism classes in $\mathrm{PLS}(n; m)$ is $\#\mathrm{Isom}(n; m) = \sum_{z \in \mathrm{CS}_n} \Delta(z, z, z) / \prod_{i=1}^{n} i^{d_i^z}\, d_i^z!$, and 2. the number of isotopism classes in $\mathrm{PLR}(r, s, n; m)$ is $\#\mathrm{Isot}(r, s, n; m) = \sum_{(z_1, z_2, z_3) \in \mathrm{CS}_r \times \mathrm{CS}_s \times \mathrm{CS}_n} \Delta(z_1, z_2, z_3) / \big(\prod_{i} i^{d_i^{z_1}} d_i^{z_1}! \prod_{j} j^{d_j^{z_2}} d_j^{z_2}! \prod_{k} k^{d_k^{z_3}} d_k^{z_3}!\big)$. In practice, it is not necessary to perform computations for all possible triples $(z_1, z_2, z_3) \in \mathrm{CS}_r \times \mathrm{CS}_s \times \mathrm{CS}_n$ to determine the number of isotopism classes in statement 2 of Theorem 6.6. The following lemma gives necessary and sufficient conditions for $\mathrm{PLR}((\Theta, \mathrm{Id}))$ to contain a non-empty partial Latin rectangle. This generalizes in a natural way a pair of similar results for Latin squares [61, Lemma 3.6] and partial Latin squares [25, Lemma 2.2]. Lemma 6.7. Let $\Theta \in \mathcal{I}_{r,s,n}$ be an isotopism of cycle structure $(z_1, z_2, z_3) \in \mathrm{CS}_r \times \mathrm{CS}_s \times \mathrm{CS}_n$. The set $\mathrm{PLR}((\Theta, \mathrm{Id}))$ contains at least one non-empty partial Latin rectangle if and only if there exists a triple $(i, j, k) \in [r] \times [s] \times [n]$ such that the following two conditions are satisfied: 1. $\mathrm{lcm}(i, j) = \mathrm{lcm}(i, k) = \mathrm{lcm}(j, k) = \mathrm{lcm}(i, j, k)$, and 2. $z_1$ has an $i$-cycle, $z_2$ has a $j$-cycle, and $z_3$ has a $k$-cycle.
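As a small illustration of Theorem 6.1, here is a sketch computing the isotopism-class size from the autotopism group order; the example inputs are illustrative placeholders, not values from the paper.

```python
from math import factorial

def isotopism_class_size(r, s, n, autotopism_group_order):
    """Size of the isotopism class of P in PLR(r,s,n), by the
    Orbit-Stabilizer Theorem: r! s! n! / #I(P,P)."""
    return factorial(r) * factorial(s) * factorial(n) // autotopism_group_order

# The empty 2 x 3 rectangle on 3 symbols is fixed by every isotopism,
# so its isotopism class has size 1:
print(isotopism_class_size(2, 3, 3, factorial(2) * factorial(3) * factorial(3)))  # 1
# A hypothetical P with trivial autotopism group would have class size
print(isotopism_class_size(2, 3, 3, 1))  # 2! * 3! * 3! = 72
```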
Parastrophisms preserve the number of isotopism and main classes of partial Latin rectangles of a given order. Thus, in practice, it is enough to focus on the case $r \leq s \leq n$ to determine the number of isotopism classes in $\mathrm{PLR}(r, s, n; m)$, whereas the number of main classes splits into three cases: (a) $r < s < n$; (b) $r = s < n$; and (c) $r = s = n$. In (a), the parastrophism group $S_{r,s,n}$ is only formed by the trivial permutation $\mathrm{Id} \in S_3$ and hence the number of main classes coincides with that of isotopism classes. In order to deal with (b) and (c), and keeping in mind Theorem 6.5, let us define the following two sets, for permutations $\beta$ and $\gamma$: $C_1(\beta, \gamma) := \{(\alpha, \beta', \gamma') \in \mathcal{I}_{r,r,n} : \alpha\beta' \sim \beta \text{ and } \gamma' \sim \gamma\}$ and $C_2(\gamma) := \{(\alpha, \beta, \gamma') \in \mathcal{I}_{r,r,r} : \alpha\beta\gamma' \sim \gamma\}$. The next result follows straightforwardly from Theorem 6.2, Lemma 6.3, and (6.1). Theorem 6.8. Let $r, n \geq 1$ and $m \leq r^2$. The following statements hold. Computational results Inclusion-exclusion method For small graphs $G$, Tables 2 and 3 list the polynomial $P(G) = P(G; r, s, n)$ in Theorem 2.6. These polynomials were computed using a C++ program, using geng (packaged with nauty [45,46,48]) to generate a list of isolated-vertex-free non-isomorphic graphs (e.g. "geng -d1 3" generates 3-vertex isomorphism class representatives with minimum degree 1) and bliss [40] to compute their automorphism group size. The notation $abc$ is shorthand for the sum of the monic monomials with variables $r$, $s$, and $n$ and exponents $a$, $b$, and $c$. For example, $210 = r^2 s + r^2 n + s^2 r + s^2 n + n^2 r + n^2 s$ and $2 \cdot 100 = 2(r + s + n)$. By Lemma 2.9, substituting the data in Tables 2 and 3 into the formula in Theorem 2.6 gives a formula for $f_m(r, s, n)$ containing all terms of degree $\geq 3m - 9$; unlisted graphs $G$ have $v - c(G) \geq 5$, and thus contribute to terms in the polynomial with degree at most $3m - 10$. In this regard, the following result generalizes Theorem 4.7 in [27], which only deals with the case $m \leq 6$. Tables 2 and 3 contain all graphs with no isolated vertices and up to 5 vertices; when $v \geq 6$, graphs make a zero contribution to Theorem 2.6 since the binomial $\binom{m}{v} = 0$. Chromatic polynomial method We use Theorem 3.1 to compute exact formulas for $f_m(r, s, n)$ for all $m \leq 13$, which are available from [31]. They corroborate in particular the formulas shown in [27] for $m \leq 6$. The authors acknowledge the use of GAP [35], the GAP package GRAPE [57] (which uses nauty), and the Tutte polynomial software tutte_bhkk [11] (available from github.com/thorehusfeldt/tutte_bh) for these computations. Sade's method We implement Algorithm 1 in C++ using nauty for graph isomorphism and GMP [37] for arbitrary precision arithmetic, which we use to compute $\#\mathrm{PLR}(r, s, n; m)$ for all $r, s, n \leq 7$, and for $r, s \leq 6$ when $n = 8$ (for all $0 \leq m \leq rs$). Our computations for $r, s, n \leq 6$ corroborate Tables 2 through 5 in [27]. The remaining cases are listed here in Tables 4 through 8. Algebraic geometry method We implement Theorem 5.1 in Singular [17] and Minion [36] to determine the values $\Delta(z_1, z_2, z_3)$, for all $(z_1, z_2, z_3) \in \mathrm{CS}_r \times \mathrm{CS}_s \times \mathrm{CS}_n$ satisfying the conditions of Lemma 6.7, when $r, s, n \leq 6$. Theorem 6.6 has then been applied to obtain the corresponding numbers of isomorphism and isotopism classes of partial Latin rectangles, as listed in Tables 9 through 12. The number of main classes of partial Latin rectangles in $\mathrm{PLR}(r, s, n)$ according to their weights is given in Tables 13 and 14 when $2 \leq r \leq s \leq n \leq 6$.
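The conjugacy-class sizes entering these Burnside sums come from (6.1); a minimal sketch:

```python
from math import factorial
from collections import Counter

def permutations_with_cycle_structure(m, cycle_lengths):
    """Number of permutations of S_m with the given multiset of cycle
    lengths, via (6.1): m! / prod_i (i^{d_i} * d_i!)."""
    assert sum(cycle_lengths) == m, "cycle lengths must partition m"
    d = Counter(cycle_lengths)  # d[i] = number of cycles of length i
    denom = 1
    for i, di in d.items():
        denom *= i ** di * factorial(di)
    return factorial(m) // denom

# Cycle structure 3^1 2^2 1^1, as in the example in the text:
print(permutations_with_cycle_structure(8, [3, 2, 2, 1]))  # 8!/(3 * 8 * 1) = 1680
```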
We include only the cases in which $r$, $s$, and $n$ are not pairwise distinct; otherwise, the numbers of main classes and isotopism classes coincide. Constructive enumeration It is also possible to enumerate constructively the isotopism and main classes in $\mathrm{PLR}(r, s, n; m)$. We simply extend all representative weight-$(m-1)$ partial Latin rectangles by one entry in all possible ways, and throw away those that belong to the same class as an already discovered partial Latin rectangle. To compare isotopism and main class equivalence, it is enough, for instance, to generate a graph similar to those proposed in [47]. For fixed $m \geq 1$, provided $r \geq m$, $s \geq m$, and $n \geq m$, the numbers of isotopism classes and main classes in the set $\mathrm{PLR}(r, s, n; m)$ do not vary with $r$, $s$ and $n$, since increasing them amounts to adding empty rows, empty columns, or unused symbols. To compute these numbers, we use the above constructive method, but allow the possibility of introducing new rows, columns, and/or symbols when extending weight-$(m-1)$ partial Latin rectangles. We perform this enumeration for $m \leq 11$, and the results are given in Table 15. The results for main classes are consistent with those independently obtained in [18,69]; moreover, [18] also computes the number of main classes for $m = 12$. Direct constructive enumeration of isomorphism classes is infeasible, since the numbers grow too quickly. Moreover, isotopic partial Latin rectangles may have different-sized isomorphism classes, so we cannot easily derive the number of isomorphism classes within an isotopism class, which thwarts modifying the approach we use for enumerating isotopism classes to enumerate isomorphism classes. Instead, using an algebraic geometry method like that in Section 5, we enumerate isomorphism classes for $m \leq 6$ in [30, Table 2]. Verification The authors have made efforts to ensure the numbers and formulas presented here are as bug-free as possible; we document these efforts in this section. First, the various source codes used and their output are available from [31]. Next, where feasible, computations have been independently performed, using different techniques and different software. Where possible, we have also cross-checked the results of the enumeration methods. • The number of isotopism classes and main classes has been computed using both the algebraic geometry method (for $r, s, n \leq 6$) and constructive enumeration (for $r, s, n \leq 5$). • For $m \leq 13$, the results of the computation of $\#\mathrm{PLR}(r, s, n; m)$ have been cross-checked against the computed polynomials $f_m(r, s, n)$; for 854 quadruples $(r, s, n; m)$ the computations agreed exactly. In addition to cross-checking computational results, we check the divisibility of the numbers computed using the following theorem (Theorem 8.1). More specifically, we check that the exact formulas for $f_m(r, s, n)$, for $m \leq 13$, satisfy Theorem 8.1 whenever $k \in \{1, \ldots, 10\}$ and $r, s, n \in \{k+1, \ldots, k+10\}$.
8,596
2019-08-28T00:00:00.000
[ "Mathematics" ]
Commonsense psychology in human infants and machines Human infants are fascinated by other people. They bring to this fascination a constellation of rich and flexible expectations about the intentions motivating people's actions. Here we test 11-month-old infants and state-of-the-art learning-driven neural-network models on the "Baby Intuitions Benchmark" (BIB), a suite of tasks challenging both infants and machines to make high-level predictions about the underlying causes of agents' actions. Infants expected agents' actions to be directed towards objects, not locations, and infants demonstrated default expectations about agents' rationally efficient actions towards goals. The neural-network models failed to capture infants' knowledge. Our work provides a comprehensive framework in which to characterize infants' commonsense psychology and takes the first step in testing whether human knowledge and human-like artificial intelligence can be built from the foundations cognitive and developmental theories postulate. The early-developing ease with which infants know about people (Gergely, Nádasdy, Csibra, & Bíró, 1995; Woodward, 1998), objects (Spelke, 1990; Stahl & Feigenson, 2015), and places (Hermer & Spelke, 1994) is impressive, especially compared with the difficulties machines have had in achieving these simple human competencies (Lake, Ullman, Tenenbaum, & Gershman, 2017; Marcus & Davis, 2019). Such differences between human and artificial intelligence (AI) are critical to address if we aim to create commonsense AI, leading to AI that we better understand and that better understands us. One of the general challenges of building commonsense AI is deciding what knowledge to start with. A human infant's foundational knowledge is limited, abstract, and reflects our evolutionary inheritance, yet it can accommodate any context or culture in which that infant might develop (Spelke, 2022; Spelke & Kinzler, 2007). If an aim of AI is to build the flexible, commonsense thinker that human adults become, then machines might need to start like adults do, from the same core abilities as infants, whether achieved through learning-driven or engineered approaches (Botvinick et al., 2017). Over the past several decades, foundational research on infants' commonsense psychology, i.e., infants' understanding of the intentions, goals, preferences, and rationality underlying agents' actions, has suggested that infants attribute goals to agents and expect agents to pursue goals in rationally efficient ways (Baillargeon, Scott, & Bian, 2016; Gergely et al., 1995; Spelke, 2022; Woodward, 1998). The predictions that support infants' commonsense psychology are foundational to human social intelligence (Banaji & Gelman, 2013; Jara-Ettinger, Gweon, Schulz, & Tenenbaum, 2016) and could thus inform better commonsense AI, but these predictions are typically missing from machine-learning algorithms, which instead predict actions directly (e.g., churn, clicks, likes, etc.; Griffiths, 2015) and therefore lack flexibility in new contexts and situations. Nevertheless, research on infants' commonsense psychology has not yet been evaluated in a framework that could be directly tested against machines', let alone built into them, because of non-scalable stimuli, varied task demands, isolated questions, and mixed results.
For example, experiments on infants' commonsense psychology have exemplified agents and their actions using various displays, from live human actors reaching for everyday objects (Woodward, 1998), to live puppets with or without animate features like eyes or fur (Johnson, Slaughter, & Carey, 1998), to highly minimal animations of simple shapes navigating in 2D or 3D worlds (Csibra, Bíró, Koós, & Gergely, 2003; Csibra, Gergely, Bíró, Koós, & Brockbank, 1999). These experiments have also typically focused on individual questions of, e.g., goal (Woodward, 1998) or rationality (Gergely et al., 1995) attribution, although some work has probed, for example, how infants' inferences about goals and rationality might combine to support notions of consistency, cost, or value (Liu, Ullman, Tenenbaum, & Spelke, 2017; Scott & Baillargeon, 2013). Different accounts of infants' knowledge about agents have suggested that this knowledge: coheres as a unified set of abstract concepts of causal efficacy, efficiency, goal-directedness, and perceptual access (Spelke, 2022); reflects infants' intuitive understanding of agents' mental states, which direct their efficient actions consistent with those mental states (Baillargeon et al., 2015; Baillargeon et al., 2016); or emerges from individual achievements rooted in infants' own action experience (Woodward, 2009; Woodward, Sommerville, & Guajardo, 2001). From this rich experimental and theoretical tradition thus arises the need for a comprehensive framework in which to characterize infants' knowledge of agents, with results on one task comparable with those on another, and with results on the suite of tasks comparable across infants and machines. Such a framework can inform both theories of infants' knowledge and the future of human-like AI. Here we take a critical step in addressing this need. We provide a comprehensive framework for testing infants' commonsense psychology by assessing infants' performance on the "Baby Intuitions Benchmark" (BIB), a suite of six tasks probing commonsense psychology. BIB was designed expressly to allow for testing both infant and machine intelligence alike (Gandhi, Stojnic, Lake, & Dillon, 2021), and, fulfilling that intention, here we also directly compare the performance of infants and machines, providing an empirical foundation for building human-like AI. For each task, observers first see eight familiarization trial videos in which an agent acts consistently in terms of its goals, rationality, or instrumentality. The exact make-up of the grid world and the movement of the agent may vary across trials, as described in the main text and SI. One example still image per task from a familiarization trial video is shown in Fig. 1. Observers then see expected and unexpected test trial videos (with the order of these trials varying for infants). Example still images of both test trial videos per task are shown in Fig. 1. All of the videos are available at: https://osf.io/r98je/. Importantly, all of BIB's tasks are presentationally consistent, allowing for comparisons across tasks without concerns of attributing null effects to varying visual, memory, or other task demands. Instead of focusing on one principle of commonsense psychology, moreover, BIB's tasks focus on three possible attributions to agents' actions that an observer could make (goal attribution, rationality attribution, and instrumentality attribution), thereby addressing whether and how such principles of commonsense psychology might cohere.
Using BIB's environment (Gandhi et al., 2021), we procedurally generated the video stimuli to test infants and computational models and chose the clearest examples of the particular principles of commonsense psychology targeted by each task (Figs. 1 and S1). The first three tasks focus on an observer's attribution of goals to agents' actions. The Goal-Directed Task captures the idea that agents' goals are directed towards objects, not locations. Observers watch an agent repeatedly move to the same one of two objects in approximately the same location in an unchanging grid world during familiarization. At test, observers may be more surprised when the agent moves to a new object in that grid world after the locations of the two objects switch (Woodward, 1998). The Multi-Agent Task asks whether goals are specific to agents. Observers watch an agent move to the same one of two objects during familiarization in a changing grid world, with both objects appearing in varying locations. At test, observers may be more surprised when the original agent versus a new agent moves to a new object (Buresh & Woodward, 2007; Repacholi & Gopnik, 1997). The Inaccessible-Goal Task asks whether agents might form new goals when their existing goals become unattainable. Observers watch an agent move to the same one of two objects during familiarization in a changing grid world, with both objects appearing in varying locations. At test, the grid world changes again such that the agent's goal object becomes physically inaccessible. Observers may be more surprised when the agent moves to a new object when its prior goal object is accessible versus inaccessible (Luo & Baillargeon, 2007; Scott & Baillargeon, 2013). The next two tasks focus on an observer's attribution of rationality to agents' actions. The Efficient-Agent Task captures the idea that agents act rationally to achieve goals. Observers watch an agent move to an object efficiently around obstacles in an unchanging grid world during familiarization. At test, the object appears in a location in which it had appeared during familiarization, but the grid world has changed such that the obstacles that blocked the object are gone or have been replaced with different obstacles (Gergely et al., 1995). Observers may be more surprised when the agent moves along a familiar but now inefficient path to the object. The Inefficient-Agent Task asks what expectations observers have about agents who initially move inefficiently in a changing grid world. During familiarization, observers watch an agent move along the same paths to an object as the agent in the Efficient-Agent Task, but this time there are no obstacles in the agent's way, so the agent's movements to the object are inefficient. At test, the environment changes as in the Efficient-Agent Task. Observers may either be more surprised when the agent continues to move inefficiently to the object or may have no expectations about whether that agent will move efficiently or inefficiently to the object (Gergely et al., 1995). The last task focuses on an observer's attribution of instrumentality to agents' actions. The Instrumental-Action Task captures the idea that agents should only take instrumental actions when necessary. During familiarization, observers watch an agent move first to a key, which it uses to remove a barrier around an object in varying locations, and then to that object.
At test, observers may be more surprised when the agent continues to move to the key, instead of directly to the object, when the barrier is no longer blocking the object (Sommerville & Woodward, 2005; Woodward & Sommerville, 2000). All of the stimuli videos are available at: https://osf.io/r98je/, and additional details about each task are included in the SI. BIB's task structure adopts the "violation-of-expectation" looking-time paradigm often used to test infants (Spelke, 1985; Téglás et al., 2011). Observers see a series of familiarization trials that serve to set up an expectation, followed by an expected outcome that is perceptually dissimilar to the familiarization but conceptually consistent, and an unexpected outcome that is perceptually similar to the familiarization but conceptually surprising. This task structure has been used in recent machine-learning benchmarks focusing on common sense (Piloto, Weinstein, Battaglia, & Botvinick, 2022; Shu et al., 2021; Smith et al., 2019) and is advantageous because it both protects against low-level heuristic-based solutions (Spelke, 1985) and allows for an algorithm's quantitative measure of surprise to be compared with a well-established psychological measure of surprise (Piloto et al., 2022; Stahl & Kibbe, 2022). Infant design and analyses In Experiment 1, we collected infants' responses to two of BIB's six tasks, the Goal-Directed Task and the Efficient-Agent Task. Mixed-model linear regressions with raw looking time as the dependent variable, outcome (expected versus unexpected) as a fixed effect, and participant as a random-effects intercept evaluated infants' performance on each task, and an additional regression examined infants' overall performance across both tasks. To obtain p-values, we ran Type 3 Wald tests on the results of each regression (a minimal code sketch of this analysis appears at the end of this section). Experiment 1 focused on these two tasks because the common sense they measured has had consistent findings in the prior literature on infants' action understanding (Baillargeon et al., 2016; Spelke, 2022; Woodward, 2009). Experiment 1 thus aimed to provide initial evidence of infants' commonsense psychology, as elicited by BIB's highly minimal displays, in BIB's fully observable, overhead navigational context, and with BIB's multiple tasks presented to infants online. Experiment 2 followed a preregistered design and analysis plan (https://osf.io/p6kba) with replications of the two tasks in Experiment 1 with several improvements, including: automated trial progression; balancing of the side of the goal object across participants in the Goal-Directed Task; and matching of the test-trial lengths within participants in the Efficient-Agent Task. Infants were tested on these two tasks as well as on BIB's other four tasks outlined above that were not included in Experiment 1. Following Experiment 1, Experiment 2 evaluated infants' performance on each task with planned mixed-model linear regressions and Type 3 Wald tests with raw looking time as the dependent variable, outcome (expected versus unexpected) as a fixed effect, and participant as a random-effects intercept. Additional planned regressions examined infants' overall performance across all six tasks and directly compared their performance on the two tasks focused on agents' rational actions. Infant participants In Experiment 1, typically developing 11-month-old infants (N = 26, M age = 11.13 months, Range = 10.42 to 11.83 months; 12 girls) born at ≥37 weeks gestational age were included.
They completed the Goal-Directed Task, the Efficient-Agent Task, or both, with half of the infants receiving each task first, totaling N = 48 individual testing sessions and N = 24 sessions per task. An additional four sessions were excluded because infants did not complete the session. In Experiment 2, typically developing 11-month-old infants (N = 58, M age = 11.06 months, Range = 10.50–11.50 months; 31 girls) born at ≥37 weeks gestational age were included. Each infant completed at least one of BIB's tasks, totaling N = 288 individual testing sessions. Following our preregistration, data collection stopped when 32 infants (M age = 11.09 months, Range = 10.50–11.50 months; 17 girls) completed all six of BIB's tasks. Tasks were presented in a semirandomized order using 32 fixed orders that averaged to each task being presented 5.33 times in each ordinal position (range: 4–7 times). All included sessions for each task contributed to the analyses reported here. The final sample sizes for each task were: Goal-Directed Task, 48; Multi-Agent Task, 49; Inaccessible-Goal Task, 47; Efficient-Agent Task, 47; Inefficient-Agent Task, 49; Instrumental-Action Task, 48.

An additional 37 sessions were excluded because of preregistered exclusion criteria, including: looking time < 1.5 s to at least one test trial and/or two familiarization trials, with or without the infant completing the session (16); poor video quality and/or technical failure (18); and caretaker interference (3). An additional two sessions were excluded post hoc for extreme values (> 40 s) to one test outcome, which could artificially inflate the calculation of the sample's variance. These extreme values were identified through examination of a histogram of the raw looking times across all of the sessions and all of the tasks by two researchers masked to the task and outcome represented by each value. Exclusions were consistent across tasks: Goal-Directed Task, 5; Multi-Agent Task, 6; Inaccessible-Goal Task, 9; Efficient-Agent Task, 7; Inefficient-Agent Task, 5; Instrumental-Action Task, 7. The total exclusion rate was 11.9%. Participating families received a $5 Amazon gift card after each testing session and a bonus gift card of $30 if they completed all six sessions. Prior to participation in session one, we obtained informed consent from the infant's legal guardian, and we confirmed consent before each subsequent session. The use of human participants for this study was approved by the Institutional Review Board on the Use of Human Subjects at our university.

Infant procedure

Infants were tested online on Zoom. In the first ten minutes of the first testing session, the experimenter explained to caretakers the instructions for setting up their device and for positioning the infant in front of the screen. We asked caretakers to close their eyes and not communicate with the infant during the stimuli presentation. The experimenter, masked to what trial was being presented and to the order of the test trials, coded infants' looking to the stimuli live from the start of each video and controlled the progression of stimuli using PyHab (Kominsky, 2019) and slides.com. Each trial video was preceded by a 5 s attention grabber (a swirling blob accompanied by a chiming sound, centered on the screen) to focus the infant's attention on the screen, and each video froze after the agent reached an object. The last frame of the video remained on the screen until infants looked away for 2 s consecutively or for a maximum of 60 s.
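These live-coded looking times are the raw input to the planned mixed-model analyses described above. The following is a minimal sketch of that analysis in Python using statsmodels; the file and column names are hypothetical stand-ins rather than the study's actual materials, which used Type 3 Wald tests on the fitted regressions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per test trial. Hypothetical columns:
#   participant, outcome ("expected" / "unexpected"), looking_time (s)
trials = pd.read_csv("looking_times.csv")

# Mixed-model linear regression: raw looking time as the dependent
# variable, outcome as a fixed effect, and participant as a
# random-effects intercept.
fit = smf.mixedlm("looking_time ~ outcome",
                  data=trials,
                  groups=trials["participant"]).fit()

# The summary reports a Wald test on the outcome coefficient,
# corresponding to the p-values reported below.
print(fit.summary())
```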
Testing sessions were recorded through the Zoom recording function, capturing both the infant's face and the screen presenting the stimuli. Following our preregistration, a different researcher, masked to the study outcome, what trial was being presented, and the order of the test trials, recoded 48 randomly chosen sessions (25%) from the 32 infants who completed all six tasks. The reliability between the first and second coder was very high (ICC = 0.98).

Infant results

Infants' performance on Experiment 1's two tasks is displayed in Fig. 2. Infants' looking time varied by task, with longer looking in the Efficient-Agent versus Goal-Directed Task (F(1, 71) = 9.34, p = .003), reflecting the longer test-trial lengths in the Efficient-Agent Task (see SI). Overall, infants looked longer at the unexpected versus expected outcomes (F(1, 66) = 11.34, p = .001), and there was no task by outcome interaction (F(1, 66) = 0.30, p = .585). Infants were surprised (looked longer) when an agent moved to a new object in the Goal-Directed Task (F(1, 23) = 4.73, p = .040), and they were surprised when an efficient agent later took an inefficient path to an object in the Efficient-Agent Task (F(1, 23) = 2.60, p = .016).

We first considered infants' performance on Experiment 2's three tasks that focused on goal attribution: the Goal-Directed, Multi-Agent, and Inaccessible-Goal Tasks. First, consistent with the results in Experiment 1, infants were surprised when an agent moved to a new object in the Goal-Directed Task (F(1, 47) = 4.09, p = .049). Infants presented with a new agent in the Multi-Agent Task, however, did not show a difference in surprise when that agent versus the original agent moved to a new object (F(1, 48) = 3.41, p = .071; with longer looking times to the expected outcome). Infants in the Inaccessible-Goal Task also did not show a difference in surprise when an agent moved to a new object when its goal object was accessible versus inaccessible (F(1, 46) = 0.02, p = .891).

We next considered infants' performance on the two tasks that focused on rationality attribution: the Efficient-Agent and Inefficient-Agent Tasks. First, consistent with the results in Experiment 1, infants were surprised when an efficient agent later took an inefficient path to an object in the Efficient-Agent Task (F(1, 46) = 7.72, p = .008). Infants in the Inefficient-Agent Task did not show a difference in surprise when an inefficient agent continued to move inefficiently to an object at test (F(1, 48) = 2.51, p = .119). But, when comparing infants' performance in the Efficient-Agent and Inefficient-Agent Tasks directly, there was no significant task by outcome interaction (F(1, 132) = 0.49, p = .484): We did not find evidence that infants' surprise at the inefficient agent's later inefficient action differed from their surprise at the efficient agent's later inefficient action.

Finally, we considered infants' instrumentality attribution through their performance on the Instrumental-Action Task. Infants did not show a difference in surprise when the agent moved to the tool as opposed to its goal object when the tool was no longer needed to achieve the goal (F(1, 47) = 0.03, p = .853).

Infant discussion

Infants' successful performance in the Goal-Directed and Efficient-Agent Tasks in both Experiments 1 and 2 suggests that they expect agents' actions to be goal-directed towards objects, not locations, and that they expect agents' goal-directed actions to be rationally efficient.
These results also show that infants' common sense about the underlying causes of agents' actions is accessible when testing infants online and is highly abstract: Infants' expectations are elicited by BIB's minimal displays and generalize to BIB's novel, overhead navigational context. This latter suggestion is especially striking given infants' success on the Efficient-Agent Task, since obstacles in the grid world blocked an agent's direct access to the goal object. Given infants' sensitivity to and use of agents' perceptual access to objects when making inferences about agents' actions (Luo & Baillargeon, 2007; Luo & Johnson, 2009), infants evidently appreciated BIB's blocking obstacles as only physical, not perceptual. With BIB's context providing no information that these obstacles limit an agent's perceptual access, infants may have interpreted the obstacles as something that agents could "see over" or "see through." Future studies could explore how infants appreciate the geometric, physical, and perceptual affordances of such overhead navigational environments.

Infants' pattern of performance on BIB thus enriches our understanding of their commonsense psychology and raises new questions about the abstract principles that might be inherent to that common sense. Building on questions of infants' sensitivity to agents' physical and perceptual access to objects, future versions of the Goal-Directed Task could reveal how having an agent move around obstacles to a goal object, instead of taking only straight paths (actions providing additional cues to agency; Johnson, Shimizu, & Ok, 2007; Luo & Baillargeon, 2005), might bolster infants' goal attribution in that task. Introducing significant changes to the arrangement of obstacles across the familiarization and test environments in the Goal-Directed Task, moreover, could explore the effects of context changes on goal attribution (Sommerville & Crane, 2009). These latter results might also shed light on infants' failures in some of BIB's other tasks. For example, infants may have failed in the Inaccessible-Goal Task because the arrangement of obstacles changed from familiarization to test, including in a way that affected one object's physical accessibility. Infants may have found a change in the object's accessibility itself surprising, or they may not have generalized the agent's goal to this new test environment with significantly different physical affordances because they interpreted this change as indicating two different places in which the agent was acting (Sommerville & Crane, 2009). The Multi-Agent Task similarly changed the arrangement of obstacles from familiarization to test, although infants may have failed in this task simply because of heightened attention to the new agent, who appeared for the first and only time in the expected outcome (prior studies showing agent-specific goal attribution had presented the new agent in both test outcomes; Buresh & Woodward, 2007). Changes to the affordances of the environment from familiarization to test may also explain the pattern of findings in the Inefficient-Agent Task, which did not differ from the pattern of findings in the Efficient-Agent Task.
In particular, previous literature suggests both that infants do not expect an agent who had previously moved inefficiently to later move efficiently when an obstacle present during familiarization is removed from the test environment (Gergely et al., 1995; Skerry, Carey, & Spelke, 2013) and that infants do expect a previously inefficient agent to later move efficiently if the test environment introduces a new obstacle. The changes in the number and location of the obstacles across the Inefficient-Agent Task's familiarization and test environments may have weakly elicited, or elicited in only some infants, this latter, "default" prediction about rationally efficient goal-directed actions for inefficient agents. Future versions of the Inefficient-Agent Task could thus focus specifically on the effects of different kinds of changes in the context and in the environment's affordances on infants' rationality attribution. Finally, given infants' successes in previous tasks probing their understanding of instrumental actions, infants may have failed in BIB's Instrumental-Action Task because they could not understand the tool object's causal efficacy (Sommerville, Hildebrand, & Crane, 2008) or the agent's ultimate goal. Specifically, prior findings suggesting that infants recognize agents' instrumental actions (e.g., the use of a tool) relied on tools whose causal efficacy was familiar to infants (e.g., pulling a cloth to bring a toy within reach; Piaget, 1953; Sommerville & Woodward, 2005) or on novel tools with which infants were first given direct experience (Sommerville et al., 2008). The tool infants saw in the Instrumental-Action Task was both novel and not something they were given experience with. Future versions of the Instrumental-Action Task might thus introduce state-changes, such as colour changes, to the contacted tools and objects, which, in previous studies, have made the causal efficacy of otherwise novel and inscrutable actions appreciable to young infants (Liu, Brooks, & Spelke, 2019; Skerry et al., 2013).

Model design and analyses

To examine whether infants' intelligence about agents might be reflected in state-of-the-art machine intelligence, we compared infants' performance on BIB in Experiment 2 to the performance of three learning-driven neural-network models. Following prior work (Gandhi et al., 2021; Rabinowitz et al., 2018), the models formed predictions about an agent's actions at test based on its actions during familiarization. To obtain a continuous measure of surprise as a correlate of infants' looking time, we calculated the models' prediction error for each frame of each outcome and took the frame with the maximum error. To compare model and infant performance, we then calculated the Z-scored mean surprisal score to each outcome for each model and the Z-scored mean looking time to each outcome for infants. Z-scores were calculated within task. For an unplanned quantitative comparison of the overall similarity between infants' and each model's performance, we evaluated the root mean squared error (RMSE) across BIB's six tasks using the mean Z-score to the unexpected outcome. We also included a comparison between infants' performance and a "baseline," to which we assigned a surprisal score of 0 for all tasks.
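To make this scoring pipeline concrete, here is a minimal sketch of the surprisal, z-scoring, and RMSE computations as described above; the function and variable names are our own illustrative choices, not code from the released benchmark.

```python
import numpy as np

def surprisal(per_frame_errors):
    # A trial's surprisal is the maximum per-frame prediction error.
    return float(np.max(per_frame_errors))

def zscore_within_task(scores):
    # scores: mean surprisal (or looking time) per outcome for one task.
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std()

def rmse(model_z_unexpected, infant_z_unexpected):
    # Overall model-infant similarity across BIB's six tasks, using the
    # mean Z-score to the unexpected outcome of each task.
    diff = np.asarray(model_z_unexpected) - np.asarray(infant_z_unexpected)
    return float(np.sqrt(np.mean(diff ** 2)))

# The "baseline" comparison assigns a surprisal of 0 to every task:
# rmse(np.zeros(6), infant_z_unexpected)
```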
Finally, to confirm that the models' performance on the specific trials presented to infants was representative of their performance more generally and not due to any idiosyncrasies of the particular videos shown to infants, we also evaluated the models' accuracy on BIB's full dataset (Gandhi et al., 2021). Because those results were consistent with the models' performance on the infant videos and with prior work (Gandhi et al., 2021), they are reported in the SI.

Model specifications

Learning-driven neural-network models have accelerated recent advances in AI (LeCun, Bengio, & Hinton, 2015; Rabinowitz et al., 2018), and so we chose to compare such models' performance on BIB to infants'. Approaches like reinforcement learning (Sutton & Barto, 2018) and inverse reinforcement learning (Ng & Russell, 2000), for example, have succeeded in learning to control agents and in understanding the actions of agents, but these approaches cannot be used with BIB because they require privileged information, including the ability to actively control agents in the test environment and, in the case of reinforcement learning, to receive a reward. Infants engage with stimuli like BIB's through passive observation, and so we based our modeling on the "Theory of Mind Net (ToMnet)" architecture from Rabinowitz et al. (2018), a neural network designed specifically for passive observation that has been shown to make inferences about an agent's underlying mental states from its behavior. With this architecture, we tested three models from two classes: behavioral cloning (BC) and video modeling (Gandhi et al., 2021). The models' schematized architectures are presented in Figs. 3 and S2.

Two BC models predicted how an agent would act using the background training as examples of state and action pairs (see Model Training below). To predict the agent's next action in a test trial, the BC models combined the learned features from the previous frame of a test-trial video with the learned features from the set of familiarization-trial videos. Video modeling used a similar strategy, architecture, and training procedure, but it aimed to predict the entire next frame of the test-trial video rather than just the agent's next action. The two BC models differed in their encoding of the familiarization trials. One BC model relied on a simple multi-layer perceptron (MLP) to encode pairs of states and actions independently (Fig. S2), and the other relied on a more complex, bi-directional recurrent neural network (RNN) to sequentially encode pairs of states and actions (Fig. 3). The states were encoded with a convolutional neural network (CNN), which was pretrained using Augmented Temporal Contrast (ATC) (Stooke, Lee, Abbeel, & Laskin, 2020). Table S1 provides the CNN specifications and the ATC data-augmentation details. For both the MLP and RNN encoders, the model obtained a characteristic embedding (Rabinowitz et al., 2018) of an agent by first aggregating the embeddings across frames (using the average for the MLP and the last step for the RNN) for each familiarization trial and by second averaging across familiarization trials. When aggregating frames, the videos were randomly sub-sampled to use up to 30 frames. To predict the future actions of the agent, defined as the continuous change in position based on the video (at 3 frames per second), the models combined the characteristic embedding with the current state of the environment (also encoded with the CNN).
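As one way to picture the BC RNN variant just described, the following heavily simplified PyTorch sketch builds a characteristic embedding from familiarization (state, action) pairs and combines it with the current state to predict the agent's next (x, y) displacement. The layer sizes, the stand-in CNN, and all names are our own assumptions; the actual models used an ATC-pretrained CNN and the specifications in Tables S1 and S2.

```python
import torch
import torch.nn as nn

class BCRNN(nn.Module):
    def __init__(self, emb=128):
        super().__init__()
        # Stand-in for the ATC-pretrained CNN state encoder.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb))
        # Bi-directional RNN over the (state, action) pairs of each
        # familiarization trial; actions are 2-D position changes.
        self.rnn = nn.GRU(emb + 2, emb, bidirectional=True,
                          batch_first=True)
        # Head: characteristic embedding + current state -> next action.
        self.head = nn.Linear(2 * emb + emb, 2)

    def forward(self, fam_frames, fam_actions, current_frame):
        # fam_frames: (trials, frames<=30, 3, H, W)
        # fam_actions: (trials, frames, 2); current_frame: (3, H, W)
        t, f = fam_frames.shape[:2]
        states = self.cnn(fam_frames.flatten(0, 1)).view(t, f, -1)
        _, h = self.rnn(torch.cat([states, fam_actions], dim=-1))
        # Last RNN step per trial, then average across trials.
        char = h.permute(1, 0, 2).reshape(t, -1).mean(0)
        cur = self.cnn(current_frame.unsqueeze(0)).squeeze(0)
        return self.head(torch.cat([char, cur]))
```

In the actual models, this action head is what distinguishes the BC variants from the video model described next.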
See Table S2 for the specifications of the BC models. The one video model sequentially encoded each familiarization trial by passing up to 30 frames through a CNN and then combining them with a bi-directional RNN. The model obtained a characteristic embedding of an agent by averaging the RNN embeddings. The model combined the characteristic embedding with the current state of the environment (specified by the current frame of the video) to predict the next frame of the video (at 3 frames per second) using a U-net architecture (Ronneberger et al., 2015).

Model training

Prior to being tested, the models were trained on thousands of background examples provided by the BIB dataset (Gandhi et al., 2021) of BIB-like agents exhibiting simple behaviors in a grid world. While the training set included individual components of the test set (e.g., agents' movement to objects, agents' consistent object goals, barriers, tools, etc.; see below), success on the test set required models to flexibly combine representations across the different training tasks. Moreover, since training included only expected outcomes, training with labeled videos was not possible. The training otherwise used the same familiarization/test task design as the test set. In one training task, an agent moved to one object in varying locations in the grid world. In a second training task, two objects were presented in varying locations in the grid world but always very close to the agent; the agent consistently moved to one of the two objects. In a third training task, the agent moved to one object in varying locations in the grid world; at varying points during the familiarization, that agent was substituted by another agent. Finally, in a fourth training task, a green barrier surrounded an agent and a key; the agent retrieved the key to let itself out of the blocked area to move to an object.

We included five runs of each model type, with the runs initialized randomly and trained until they converged on the background training. The BC models were trained to minimize mean squared error, and the video model was trained to minimize mean squared error in pixel space. Twenty percent of the background training trials were left out as a validation set, and the models successfully predicted agents' actions on the validation set for all of the background training tasks, with low prediction errors. For example, the MSE for the BC models on the validation set was about 0.03, which is 0.8% of the maximum possible prediction error (4.00). The only exception was that the BC RNN model performed an order of magnitude worse than the BC MLP model on the training task in which two objects were presented very close to the agent and the agent consistently moved to just one (see SI).

Fig. 3. Architecture of the video and BC RNN models (Gandhi et al., 2021; Rabinowitz et al., 2018). An agent-characteristic embedding was inferred from the familiarization trials using a recurrent net. This embedding, together with a frame from the test trial, was used to predict the next action of the agent in the case of the BC model and the next frame of the video, using a U-net (Ronneberger, Fischer, & Brox, 2015), in the case of the video model.

Fig. 4 displays the Z-scored means of the models' surprisal scores to the expected and unexpected outcomes for each task (see SI for additional details). The Z-scored means of infants' looking times in the tasks of Experiment 2 are also displayed. Model performance shows little resemblance to infant performance.
Model results

First, to evaluate machines' goal attribution relative to infants', we compared infants and models on the Goal-Directed Task. Unlike infants, who attributed to agents goal objects, not goal locations, the models either attributed to agents goal locations (BC MLP) or attributed neither goal objects nor goal locations (BC RNN, video model). Next, to evaluate machines' rationality attribution relative to infants', we compared infants and models on the Efficient-Agent and Inefficient-Agent Tasks. While the models attributed rational action to agents in the Efficient-Agent Task (to an even greater degree than did infants), they did not attribute rational action to previously inefficient agents acting in new environments in the Inefficient-Agent Task. Here the models' performance was nearly orthogonal to infants', who did attribute rational action to previously inefficient agents acting in new environments.

The comparisons between machine and infant performance on BIB's other three tasks revealed no instances in which the models demonstrated positive predictions about agents' actions missing from infants' predictions. In particular, while infants may have been relatively more surprised at the appearance of the new agent in the expected outcome of the Multi-Agent Task, as described above, the models did not show a difference in surprise across the two outcomes. In the Inaccessible-Goal Task, the video model did appear to be more surprised when the agent moved to a new object when its goal object was accessible, unlike the infants, but given this model's failure on the Goal-Directed and Multi-Agent Tasks, its performance is unlikely to reflect an understanding of agents' goal-directed actions towards objects. For example, the model may have learned that the obstacles in the grid world block objects and that agents move to objects. This would lead to a lower surprisal score when an agent moved to the one accessible object compared with when it moved to either one of two accessible objects. Similarly, in the Instrumental-Action Task the models seemed to have succeeded where the infants did not, showing greater surprise when the agent moved to the key when it was unnecessary to do so. But closer investigation of the models' performance shows that this apparent success is limited to test trials in which the green barrier was absent, versus present but inconsequential (see SI). A true understanding of instrumental actions would generalize across the presence or absence of the green barrier at test. The models thus did not understand agents' instrumental actions. Finally, the RMSE analysis revealed high values for all infant and model comparisons (BC RNN: 0.319; BC MLP: 0.492; video model: 0.297), suggesting little similarity between infant and model performance. Indeed, these RMSE values were higher than the one obtained by comparing infants' performance to "baseline" surprisal scores of 0 for all tasks: 0.143.

Model discussion

BIB was expressly designed to allow for testing infant and machine intelligence alike (Gandhi et al., 2021), providing an empirical foundation for building human-like AI.
While the performance of the models tested here has not previously been compared with human performance (let alone with infant performance), and while models like these are limited in their capacity for flexible generalization to novel test displays that lie outside the distribution of their training displays (a generalization BIB requires and infants excel at), such models have nevertheless accelerated recent advances in AI (LeCun et al., 2015; Rabinowitz et al., 2018). Our comparison reveals that the state-of-the-art "machine theory of mind" captured in such models is indeed missing key principles of commonsense psychology that infants possess. In particular, while infants expect agents' goal-directed actions to be towards objects, not locations, the models either have no expectations or expect those actions to be towards locations, not objects. And, while infants expect both previously efficient and previously inefficient agents to exhibit rational and efficient goal-directed actions towards objects in new environments, the models only expect previously efficient agents to act efficiently in new environments. Finally, where we were unable to find any predictions that infants might have about the goals of new agents, about agents' goal objects in new environments, or about novel instrumental actions, the models show no additional commonsense psychology.

Our approach of directly comparing infant and machine intelligence allows us to specify which principles of commonsense psychology are present in infants yet missing in machines, thereby inspiring new directions in engineering AI. For example, alternative models based on Bayesian inverse planning have been applied successfully to tasks like BIB by making more explicit abstract inferences about mental states (Baker et al., 2017; Baker, Saxe, & Tenenbaum, 2009; Shu et al., 2021). Nevertheless, extending the Bayesian approach to BIB in particular and to videos in general is not straightforward: A video format does not by itself provide the identification of the agents or objects present in the scene (let alone any relations among them). Recent approaches based on inverse reinforcement learning (Sim & Xu, 2019; Yu, Yu, Finn, & Ermon, 2019) could also be promising, but, as reviewed above, they require online, active sampling from the testing environment, and BIB's environment, like much of infants' experience, involves passive viewing. It thus remains an open challenge for learning-driven systems to acquire sufficiently rich, abstract structure from BIB's training to match infant commonsense intelligence. Nevertheless, setting infant common sense as a benchmark for machine common sense promises to give AI the foundations of human intelligence.

General discussion

BIB includes six highly minimal but presentationally consistent tasks focusing on three high-level principles of commonsense psychology: goal attribution; rationality attribution; and instrumentality attribution. Infants' successes on BIB suggest they have a highly abstract notion of agents' actions as goal-directed towards objects and a principle of rationality that leads to default expectations of agents' efficient actions towards goals. These results are consistent with the rich literature on infants' commonsense psychology (Baillargeon et al., 2015; Baillargeon et al., 2016; Spelke, 2022; Woodward, 2009; Woodward et al., 2001) and synthesize the literature's findings in a unified framework that can be directly compared with, and perhaps built into, machine intelligence.
In addition, BIB uniquely reveals that infants appreciate agents' actions in a novel, overhead navigational context, here recognizing obstacles as physical but not perceptual barriers to action. Infants' failures on BIB suggest that changes to the contexts in which goals are first demonstrated may have significant impacts on infants' goal and rationality attribution (Sommerville & Crane, 2009). For example, infants may not generalize an agent's goal to a test environment with even minimal or inconsequential changes relative to the environment in which the goal was initially demonstrated if those changes suggest that agents are acting in a new place. Regardless of how infants might come to understand the geometry of BIB's environment, their sensitivity to and use of where an agent is for goal and rationality attribution is apparent. Future studies might thus investigate infants' use of such geometry for recognizing places based on their shape or navigability even before infants can navigate on their own (Deen et al., 2017; Kosakowski et al., 2021).

Fig. 4. Z-scored means of the models' surprisal scores (each model is shown with a different shape and in a different shade of gray) and the Z-scored means of infants' looking times (shown in red) to the expected and unexpected outcomes in each of BIB's six tasks in Experiment 2. Models differ from infants in terms of infants' successful goal and rationality attribution (A), and models show no additional commonsense psychology missing from infants' performance (B). (For interpretation of the references to colour in this figure legend, please see the online version.)

Future work exploring infants' knowledge about the world could extend our general approach to investigate other aspects of infant commonsense psychology. Because BIB's tasks are procedurally generated and presentationally consistent, for example, new tasks could easily be incorporated into BIB's dataset. Future studies might explore expectations of agents' notions of cost and value (Jara-Ettinger et al., 2016) or recognition of agents' actions that might signal potential social partnerships (Meltzoff, 2007; Powell & Spelke, 2013; Schachner & Carey, 2013; Tomasello, 2018). While we show that learning-driven neural-network approaches already fall short of infants' common sense on BIB's existing tasks, such expectations will nevertheless become increasingly important for AI too as it becomes further embedded in real-world, multi-agent settings that demand common sense. Extending our approach can ultimately inform comprehensive accounts of infants' knowledge not only about agents, but also about objects (Lin, Stavans, & Baillargeon, 2022; Spelke, 1990; Stahl & Feigenson, 2015) and places (Hermer & Spelke, 1994), allowing us to more fully describe the origins and development of human common sense and provide an avenue for building the future of human-like AI.

BIB called for an interanimating research program between developmental cognitive science and artificial intelligence. The present work demonstrates that such a program is both possible and generative for both fields. Our work provides a first step in this productive dialogue between the cognitive and computational sciences to test whether knowledge can be built, in human or machine, from the foundations that cognitive and developmental theories postulate.

Credit author statement

GS, KG, BL, and MRD conceptualized the study. GS, KG, and SY curated the data. KG and SY analyzed the data. MRD and BL acquired funding and supervised the study.
MRD wrote the original draft. GS, KG, SY, and BL reviewed and edited the draft.

Data availability

Experiment 2 with infants was preregistered on the Open Science Framework (OSF) prior to data collection, and the preregistration is available at: https://osf.io/p6kba. The data, code, and materials related to all of the infant testing and the comparison between infant and machine performance are available on the OSF at: https://osf.io/htjc2/. The code related to the model testing is available at: https://github.com/kanishkg/bib-models.
Inhibition of glycosylation induces formation of open connexin-43 cell-to-cell channels and phosphorylation and Triton X-100 insolubility of connexin-43.

We transfected the cDNA for the cell-to-cell channel protein connexin-43 (Cx43) into Morris hepatoma H5123 cells, which express little Cx43 and lack gap junctional communication (open cell-to-cell channels). We found that cells overexpressing Cx43 nonetheless lacked open cell-to-cell channels, but that inhibition of glycosylation by tunicamycin induced open channels in these cells. Tunicamycin also induced biochemical changes in Cx43 protein; the level increased, and a considerable fraction became phosphorylated and Triton X-100 insoluble, in contrast to untreated cells, where Cx43 was non-phosphorylated and Triton X-100 soluble. Although tunicamycin caused the formation of open channels, the channels were not found aggregated into gap junctional plaques, as they are when they have been induced by elevation of intracellular cAMP. The results suggest that although Cx43 itself is not glycosylated, other glycosylated proteins influence Cx43 posttranslational modification and the formation of Cx43 cell-to-cell channels.

Cell-to-cell channels mediate intercellular communication by providing a direct pathway for the exchange of molecules up to 1–2 kDa (Schwarzmann et al., 1981). The molecules transferred include signaling molecules, which may play important roles in tissue homeostasis (Loewenstein, 1981), cell growth control (Loewenstein and Rose, 1992), and embryonic development (Warner, 1992). The channels are known to cluster into often quite large aggregates, forming the so-called gap junctions. In electron microscopic images of freeze-fractured gap junctions, the channels show up as particles of uniform size that span the two adjoining membranes (Kreutziger, 1968; Goodenough and Revel, 1970). The channels are formed from membrane proteins called connexins (Kumar and Gilula, 1992). More than a dozen different connexins have been identified in vertebrates, all of which share similar topology and amino acid sequence (Kumar and Gilula, 1992). One well-studied and widely expressed connexin is connexin-43 (Cx43), found in the heart (Beyer et al., 1987) and many other tissues (Beyer et al., 1987; Micevych and Abelson, 1991) as well as in a number of established cell lines (Musil et al., 1990; Mehta et al., 1992). Progress has been made in understanding how cell-to-cell channels are formed. For example, it is now known that after synthesis, Cx43 (and probably the other connexins, too; see Rahman et al. (1993)) is first assembled into multimers called connexons or hemichannels in the Golgi (Musil and Goodenough, 1993); the hemichannels are then transported to the plasma membrane, where they find counterparts on the adjoining cell's membrane to form the cell-to-cell channels proper. Yet, little is known about how the cell-to-cell channels are concentrated into the gap junction plaques. In some cells, a phosphorylated and Triton X-100 insoluble form of Cx43, but not its non-phosphorylated and Triton X-100 soluble form, is localized in the gap junctional plaques (Musil and Goodenough, 1991). It is not clear whether Cx43 becomes Triton X-100 insoluble as a consequence of hemichannel interlocking, i.e. channel formation, or as a consequence of the clustering into gap junction plaques. Neither is it clear what role phosphorylation has in Cx43 channel or gap junction formation.
We have found previously that inhibition of glycosylation by tunicamycin (Tm) greatly increases the formation of open channels in a variety of Cx43-expressing cells (Wang and Mehta, 1995). In the present study, we investigate whether any biochemical changes in Cx43 are associated with Tm-induced channel formation. For this, we constitutively expressed Cx43 in a cell line that lacks open cell-to-cell channels and studied the effects of tunicamycin on cell communication via Cx43 channels and on phosphorylation, Triton X-100 solubility, and cellular localization of Cx43.

EXPERIMENTAL PROCEDURES

Materials-All culture media were from Life Technologies, Inc., fetal bovine serum was from Hyclone Laboratories (Logan, UT), Lucifer Yellow CH was from Molecular Probes, and forskolin was from Calbiochem. The phosphodiesterase inhibitor Ro-20-1724 was a gift from Dr. P. Sorter (Hoffman-LaRoche). Rhodamine- or fluorescein isothiocyanate-labeled goat anti-rabbit IgG and alkaline phosphatase were from Boehringer Mannheim; rhodamine-labeled lectins, wheat germ agglutinin, and Dolichos biflorus agglutinin were from EY Laboratories (San Mateo, CA); all other reagents (molecular biology grade or highest purity) were from Sigma.

Cell Culture-A subclone (MHD1) of Morris hepatoma H5123 cells (Borek et al., 1969) was isolated based on its communication-enhancement response to elevation of cAMP. Cells were grown as described previously (Wang and Mehta, 1995).

Transfection of Cells-The expression construct (pSGRcx43A) was made by inserting cx43 cDNA from rat heart (Beyer et al., 1987) into the BamHI site of plasmid pSG5 (Stratagene) as described in Mehta et al. (1991). In pSGRcx43A, cx43 expression is driven by the SV40 promoter. Subconfluent cells were harvested and cotransfected by electroporation with pSGRcx43A and the expression vector for the geneticin resistance gene.

Treatments-Cells were seeded at 2 × 10^5 per 35-mm dish. Two days later, when near confluent, they were treated for experiments by replacing their medium with fresh medium containing the relevant drug at the desired concentration. Forskolin, Ro-20-1724, and tunicamycin were added from stock solutions in dimethyl sulfoxide (Me2SO), with final Me2SO not exceeding 0.4%, a concentration which did not affect any of the parameters we measured. For controls, cultures received fresh medium containing 0.4% Me2SO.

Cell-Cell Transfer of Lucifer Yellow-Micro-injection of the fluorescent dye Lucifer Yellow was performed as described (Wang and Mehta, 1995). The number of fluorescent cells (excluding the injected one) was noted 5 min after injection.

Immunostaining and Lectin Binding to Cell Surface-Immunostaining with affinity-purified anti-Cx43 antibody and surface binding of rhodamine-labeled lectins were performed as described (Wang and Mehta, 1995). Stained cells were viewed on a Nikon Diaphot fluorescence microscope with a 100× oil-immersion objective (for Cx43 immunostaining) or 40× objective (for lectin binding).
Images were photographed or captured on an optical disk (Panasonic model TQ-2026F) with an SIT66 (DAGE MTI) video camera and reproduced on a video printer (Hitachi).

Western Blot-Lysis of cells, protein separation by SDS-PAGE, and Western blot analysis for Cx43 were performed as described (Wang and Mehta, 1995). The protein concentration was determined with the Pierce BCA protein assay.

Dephosphorylation of Cx43 by Alkaline Phosphatase-After appropriate treatment, cells were lysed in alkaline phosphatase buffer (100 mM Tris, pH 8.0, 100 mM NaCl, 5 mM MgCl2) plus 2 mM PMSF and 0.6% SDS. The lysates were boiled, sheared with 27-gauge needles, and diluted with 4 volumes of alkaline phosphatase buffer. Half of the sample was treated with alkaline phosphatase (200 units/ml sample) at 37 °C for 4 h, and the other half was incubated untreated. The reaction was terminated by adding Laemmli sample buffer and boiling for 5 min. Phosphatase-treated and untreated samples were separated by SDS-PAGE and analyzed by Western blot.

Separation of Triton X-100 Soluble and Insoluble Fractions-The separation of Triton X-100 soluble and insoluble material was done essentially according to the method of Musil and Goodenough (1991). After appropriate treatment, cells from 6-cm dishes were scraped into 4 ml of phosphate-buffered saline containing 2 mM PMSF, 10 mM NaF, and 10 mM NEM. Cells were then spun down, resuspended in 1 ml of lysis buffer (5 mM Tris base, 5 mM EGTA, 5 mM EDTA plus 2 mM PMSF, 10 mM NaF, 10 mM NEM), incubated at 4 °C for 10 min, and disrupted by passing through a 25-gauge needle 25–30 times. The resulting cell lysates were brought to isotonicity by addition of 100 µl of 10× phosphate-buffered saline. Then 10% Triton X-100 was added to a final concentration of 1% (v/v). The lysates were incubated at 4 °C for 30 min. One-third of each sample (400 µl) was saved as total cell lysate, and the rest (800 µl) was centrifuged at 100,000 × g for 50 min at 4 °C. After centrifugation, the supernatant (Triton X-100 soluble fraction) was carefully removed, and 4× Laemmli buffer was added to it and to the total cell lysate to a final 1× concentration. The pellet (Triton X-100 insoluble fraction) was resuspended in 1067 µl of lysis buffer containing 2 mM PMSF, 1× phosphate-buffered saline, 1% Triton X-100, and 1× Laemmli buffer. Equal volumes of total, Triton X-100 soluble, and insoluble proteins were separated by SDS-PAGE, and Cx43 was analyzed by Western blot.

RESULTS

Overexpression of Cx43-We used Morris hepatoma H5123 cells (Borek et al., 1969), which express a low level of Cx43 mRNA but neither Cx26 nor Cx32 mRNA (Mehta et al., 1992). We transfected a subclone of Morris hepatoma cells, MHD1 (see "Experimental Procedures"), with cx43 cDNA from rat heart (Beyer et al., 1987). Several overexpressing clones were obtained, and three of them (MHD1-43A, -B, and -C) were used in this study. They all gave the same results. The expression of Cx43 protein in MHD1 cells and in one Cx43 overexpressing clone, MHD1-43A, is shown in Fig. 1A. The MHD1-43A cells express Cx43 abundantly, manyfold higher than the parental MHD1 cells. And, as in the parental cells, Cx43 in the MHD1-43A cells is mainly in the non-phosphorylated form. Morris hepatoma cells lack open cell-to-cell channels (Borek et al., 1969), and so do the cells of the subclone MHD1 (Wang and Mehta, 1995). We tested junctional communication in three clones of Cx43 overexpressing cells by micro-injecting the channel-permeable fluorescent tracer, Lucifer Yellow.
Despite their high level of Cx43 protein, the overexpressors have few open channels; in most cases, the tracer remained confined to the injected cells. A typical example for MHD1-43A cells is shown in Fig. 2A (see also Fig. 2B).

Inhibition of Glycosylation by Tunicamycin Reduces Surface Carbohydrates and Induces the Formation of Open Channels-Surface carbohydrates may have an inhibitory effect on channel formation (Lin and Levitan, 1987; Levine et al., 1991), and a reduction of surface carbohydrates by inhibition of glycosylation has been found to correlate with increased channel formation in several cell types (Wang and Mehta, 1995). We examined the effect of a glycosylation inhibitor, tunicamycin (Tm), on the abundance of surface carbohydrates and on communication in the Cx43 overexpressors. As in parental MHD1 cells (Wang and Mehta, 1995), Tm greatly reduced surface carbohydrates as detected by lectin binding to the cell surface (Fig. 3). Concomitant with this decrease, there was a dramatic increase in communication; after an 8-h tunicamycin treatment, Lucifer Yellow consistently transferred to more than 20 neighboring cells (Fig. 2A, panel d, and 2B). The time course of the communication increase in clones MHD1-43A, -B, and -C is shown in Fig. 2B. After a 2-h delay, communication rose steadily to a maximum at 8–10 h. This result is different from that obtained with the parental MHD1 cells, where tunicamycin per se failed to induce communication (Wang and Mehta, 1995). Tunicamycin has been reported to inhibit protein synthesis (Elbein, 1987), and inhibition of general protein synthesis may somehow increase communication (Azarnia et al., 1981). To test whether tunicamycin's effect on communication could be due to general inhibition of protein synthesis, we treated MHD1-43A cells with the protein synthesis inhibitor cycloheximide. As seen in Fig. 2B, communication changed little, showing that the effect of tunicamycin cannot be explained by general inhibition of protein synthesis.

Tunicamycin Treatment Induces Cx43 Phosphorylation and an Increase in Total Cx43-To find out whether the Tm-induced formation of open channels is associated with any biochemical changes in Cx43, we compared the protein from treated and untreated MHD1-43A cells in Western blots. One dramatic change was the appearance of Cx43 of higher molecular mass in Tm-treated cells (Fig. 1C), representing phosphorylated Cx43; it disappeared when cell lysates were treated with alkaline phosphatase (Fig. 1D). A second prominent change was an increase in the total Cx43 level (Fig. 1C). In contrast, no such changes were seen in Cx43 from MHD1 cells after tunicamycin treatment (Fig. 1B).

Tunicamycin Treatment Induces Triton X-100 Insoluble Cx43-Musil and Goodenough (1991) have shown that in some communication-incompetent cells, Cx43 is mainly non-phosphorylated and Triton X-100 soluble. When these cells were made communication-competent by expression of cell-cell adhesion molecules, a form of Cx43 appeared that was phosphorylated and Triton X-100 insoluble. Since tunicamycin treatment induced communication and phosphorylation of Cx43 in Cx43-overexpressing cells but not in the parental MHD1 cells, we examined Cx43 Triton X-100 solubility in both cell types before and after tunicamycin treatment to see whether communication correlated with Cx43 phosphorylation and Triton X-100 insolubility.
Tunicamycin Treatment Does Not Induce Channel Clustering into Gap Junctional Plaques-In immunostaining of communication-competent cells for cell-to-cell channel proteins, punctate staining is seen at cell-cell contacts, representing the aggregated channels in gap junction plaques (Dermietzel et al., 1987; Beyer et al., 1990; Musil and Goodenough, 1991). No such punctate Cx43 staining was detected in untreated MHD1 or Cx43 overexpressing cells (Fig. 6, a, d, and g). Tunicamycin did not induce punctate Cx43 plaque staining in MHD1 cells (Fig. 6b), nor to a significant extent in MHD1-43A or MHD1-43B cells; the Cx43 was still diffusely distributed, except that very fine dots were occasionally seen, including on top of the cells where no cell-cell contacts could be detected under the light microscope (Fig. 6, e and h). The lack of punctate staining is not due to an intrinsic inability of the cells to cluster cell-to-cell channels into gap junction plaques; bright Cx43 plaque staining appeared in both MHD1 and overexpressor cells after an 8-h forskolin treatment (Fig. 6, c, f, and i). This staining is much stronger and more abundant in overexpressor cells than in parental MHD1 cells, consistent with the much higher level of Cx43 protein (Fig. 1A) and of forskolin-induced communication (data not shown) in the overexpressor cells.

DISCUSSION

In the present study, we transfected cx43 cDNA into cells that lack open cell-to-cell channels and express little Cx43 and found that cells which overexpressed Cx43 nonetheless had few open channels. The lack of open channels in the Cx43 overexpressors does not seem to be due to any null mutation in the cx43 cDNA. The ability of cDNA-derived Cx43 to form open cell-to-cell channels is clearly evident from the difference in communication between the parental MHD1 and the overexpressor cells after tunicamycin treatment; extensive cell-to-cell transfer of tracer was induced in Cx43 overexpressing cells (Fig. 2) but not in parental MHD1 cells (Wang and Mehta, 1995). Instead, the failure of the exogenous Cx43 to make open channels points to some cellular condition non-permissive for Cx43 to make open channels, and inhibition of glycosylation remedies this condition, allowing channel formation. Owing to the lack of potential glycosylation sites in their extracellular loops, connexins are unlikely to be glycosylated. Connexin-32 is known not to be glycosylated (Hertzberg and Gilula, 1979; Rahman et al., 1993), and our finding that the apparent molecular mass of Cx43 did not decrease after tunicamycin treatment confirmed that Cx43 is not glycosylated either. Therefore, a reduction of carbohydrates on cell surface proteins other than Cx43 is a more likely cause of the observed increase in communication. From a priori considerations, glycoproteins on the cell surface can be expected to impose an inhibitory effect on the formation of cell-to-cell channels and gap junctions (Peracchia, 1985; Abney et al., 1987). It was shown previously that lectins induced or fostered intercellular communication in Aplysia neurons (Lin and Levitan, 1987) and in Xenopus oocytes (Levine et al., 1991), presumably by removing bulky glycoprotein from the plasma membrane, and we have shown that inhibition of glycosylation increased intercellular communication in a variety of mammalian cells (Wang and Mehta, 1995). Carbohydrates may interfere with any one of the steps occurring on the membrane during the formation of open cell-to-cell channels and thereby result in decreased communication.
The extracellular domain of connexins is no larger than 8–10 Å, smaller than that of many membrane glycoproteins. One possibility, therefore, is that large membrane glycoproteins interfere with hemichannel interlocking by preventing two adjoining plasma membranes from coming close enough to allow hemichannel interaction. Little is known about how the hemichannels get to the cell-cell contact sites and how channels become concentrated in the junctional plaques. It is possible that hemichannels are transported onto the plasma membrane at random sites and then laterally diffuse to the cell-cell contacts; or they could be directly inserted into these sites. Bulky membrane glycoproteins may impede the lateral movement of hemichannels on the plasma membrane, or the insertion of hemichannels into the plasma membrane at random or specific sites may involve some glycoprotein(s). Yet another possibility is that even when channels are formed, bulky surface carbohydrates in the immediate vicinity of the channels produce some condition unfavorable for the channels to be in the open state. Although tunicamycin treatment elevated the total Cx43 level in Cx43 overexpressors, it is unlikely that this caused the dramatic rise in communication. Instead, the greater total Cx43 protein may reflect a higher stability of Cx43 protein in channels than in hemichannels. We therefore interpret the increase in Cx43 to be the consequence of hemichannel interlocking rather than the cause of it. In agreement with this interpretation, in the parental MHD1 cells, where tunicamycin did not induce channel formation, the Cx43 protein level did not rise; in fact, it was slightly diminished. This is consistent with the reported inhibition of protein synthesis by tunicamycin in other cells (Elbein, 1987). One unexpected result of this study is that tunicamycin treatment induced Cx43 phosphorylation in the Cx43 overexpressor cells, raising the possibility that tunicamycin activates a kinase. Activation of protein kinase A up-regulates junctional communication and induces Cx43 phosphorylation in a variety of cells, including MHD1 cells (Wang and Mehta, 1995). But because protein kinase A activation has other effects not seen with tunicamycin treatment, e.g., stimulation of Cx43 transcription (Wang and Mehta, 1995), it is unlikely that tunicamycin activates protein kinase A. It is even less likely that tunicamycin activates protein kinase C or a tyrosine kinase because, although these kinases cause Cx43 phosphorylation, they inhibit communication (Crow et al., 1990; Filson et al., 1990; Brissette et al., 1991; Berthoud et al., 1992; Kanemitsu and Lau, 1993), whereas tunicamycin increases communication. It is therefore unclear how Cx43 becomes phosphorylated after tunicamycin treatment. Cx43 hemichannels are probably not phosphorylated (Musil and Goodenough, 1993) and are in a closed conformation. When they interlock, they must undergo a conformational change, which enables them to switch to an open state. It is possible that after this conformational change, Cx43 becomes a substrate of an unidentified, constitutively active kinase and thus gets phosphorylated. The function of this Cx43 phosphorylation is not clear. There is no evidence that phosphorylation is a prerequisite for channels to open. Musil and Goodenough (1991) showed in several cell lines that a certain form of phosphorylated and Triton X-100 insoluble Cx43 is correlated with its localization at the junctional plaques.
We found this to be true also in MHD1 cells, where forskolin induced the appearance of junctional plaques concurrently with Cx43 phosphorylation and Triton X-100 insolubility. However, we noticed differences. In the cells used by Musil and Goodenough (1991), the Triton X-100 insoluble fraction contained primarily phosphorylated Cx43, while in MHD1 and Cx43 overexpressor cells, the Triton-insoluble fraction additionally contained a substantial amount of non-phosphorylated Cx43. Another difference is that in the overexpressor cells, after tunicamycin had induced functional channels, phosphorylated and Triton X-100 insoluble Cx43 was found, but junctional plaques were not seen. The latter result would imply that Cx43 phosphorylation and Triton X-100 insolubility correlate better with hemichannel interlocking, that is, cell-to-cell channel formation, than with channel clustering into gap junction plaques, at least plaques large enough to be resolved by immunostaining. The clustering of channels into plaques that are detectable by immunostaining seems to require elevation of cAMP, in both the MHD1 and the Cx43 overexpressor cells.
FedSGDCOVID: Federated SGD COVID-19 Detection under Local Differential Privacy Using Chest X-ray Images and Symptom Information

Coronavirus (COVID-19) has created an unprecedented global crisis because of its detrimental effect on the global economy and health. COVID-19 cases have been rapidly increasing, with no sign of stopping. As a result, test kits and accurate detection models are in short supply. Early identification of COVID-19 patients will help decrease the infection rate. Thus, developing an automatic algorithm that enables the early detection of COVID-19 is essential. Moreover, patient data are sensitive, and they must be protected to prevent malicious attackers from revealing information through model updates and reconstruction. In this study, we presented a federated learning system with stronger privacy preservation for COVID-19 detection that requires no data sharing among data owners. First, we constructed a federated learning system using chest X-ray images and symptom information. The purpose is to develop a decentralized model across multiple hospitals without sharing data. We found that adding spatial pyramid pooling to a 2D convolutional neural network improves the accuracy on chest X-ray images. Second, we found that the accuracy of federated learning for COVID-19 identification is significantly reduced for non-independent and identically distributed (Non-IID) data. We then proposed a strategy to improve the model's accuracy on Non-IID data by increasing the total number of clients, the parallelism (client fraction), and the computation per client. Finally, we applied differentially private stochastic gradient descent (DP-SGD) to our federated learning model to improve the privacy of patient data. We also proposed a strategy to maintain the robustness of federated learning to ensure the security and accuracy of the model.

Introduction

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19). It has spread worldwide, resulting in the ongoing 2022 pandemic. With more than 400 million confirmed cases and five million deaths across nearly 223 countries, COVID-19 is continuing to spread around the world, and there is no sign of it stopping. Thus, it remains a serious problem for people worldwide. Although COVID-19 vaccines have provided an opportunity to slow the spread of the virus and end the pandemic, not enough COVID-19 vaccines will be available for everyone in the world to be inoculated until the end of 2024 at the earliest, according to the chief executive of the world's largest vaccine manufacturer [1]. Moreover, the emergence of COVID-19 virus variants may make the virus more infectious [2] or more capable of causing severe disease [3]. The symptoms of COVID-19 often include fever, chills, dry cough, and systemic pain [4,5]. However, many people are infected with the virus without noticeable symptoms [6,7], which makes COVID-19 infection difficult to diagnose. In addition, if a patient is detected early, the disease can be treated more quickly, limiting its spread. Therefore, it is critical to find methods that assist hospitals in the early diagnosis of COVID-19 patients. Many researchers have applied artificial intelligence (AI) technology to develop COVID-19 detection models to assist hospitals in detecting patients early. Some of them identified COVID-19 cases based on longitudinal information on patient symptom profiles [8][9][10] and achieved promising results in COVID-19 early detection.
Other researchers focused on chest radiography images because most COVID-19 cases display common features on chest radiographs, including early ground-glass opacity and late-stage pulmonary consolidation. COVID-19 can also be identified through a rounded morphology and a peripheral lung distribution [11,12]. In 2020, research published in the journal Radiology [13] demonstrated that chest radiography outperformed laboratory testing in detecting coronavirus. Therefore, chest radiography image analysis can help to screen suspected COVID-19 cases at an early stage. In particular, patients' symptoms and chest X-ray images offer various advantages, including high accessibility, affordability, ease of operation, and rapid triage of suspected COVID-19 patients. Most studies use symptom information [8][9][10], chest X-ray radiography (CXR) [14], chest computed tomography (CT) [15], and lung ultrasound (LUS) [16] as screening methods. These methods rely heavily on shared datasets for the training process. However, under general data protection regulations [17], patient data privacy must be protected against malicious attackers, because data privacy directly impacts politics, business, security, health, finances, and more. Therefore, we must find a better way for machine learning to work collaboratively while maintaining data privacy. One recent method that addresses this problem is federated learning (FL), proposed by Google [18]. Its main idea is to develop a decentralized machine learning model based on datasets from multiple data sources without sharing data. The model updates focus on the learning task rather than the raw data, and the server need only hold individual updates ephemerally. Therefore, FL offers significant privacy improvements compared to centralizing all training data. Several researchers have applied FL to COVID-19 detection tasks and achieved promising results [19][20][21]. However, some studies have demonstrated that FL may not always provide sufficient privacy guarantees: sensitive information can still be revealed through model updates [22,23]. For example, Phong [24] demonstrated that local data information can be revealed from a small portion of gradients, and in another possible scenario, a malicious attacker can reconstruct the training data from gradient information in a few iterations [25]. Unlike existing methods, we did not build a traditional FL system. In this study, we proposed an FL model for COVID-19 detection with higher privacy by adding differential privacy stochastic gradient descent (DP-SGD), which is resilient to adaptive attacks using auxiliary information. We also evaluated the parameters that maintain the robustness of FL to ensure the model's security and accuracy. In summary, this study makes the following contributions:
• We proposed a higher privacy-preserving FL model for COVID-19 detection based on symptom information and chest X-ray images collected from multiple sources (that is, hospitals) without sharing data among data owners, by adding differential privacy stochastic gradient descent (DP-SGD) resilient to adaptive attacks using auxiliary information;
• We observed that adding a spatial pyramid pooling (SPP) layer to 2D convolutional neural networks (CNNs) achieves better accuracy on chest X-ray images;
• We demonstrated that the accuracy of FL for COVID-19 detection decreases significantly for Non-IID data owing to the varying size and distribution of local datasets among different clients.
We thoroughly analyzed several design choices (for example, the total number of clients, the amount of multi-client parallelism, and the computation per client) to improve the model's accuracy with Non-IID data;
• We provided a strategy to maintain the robustness of our privacy-preserving FL model, ensuring the model's security and accuracy, by keeping the fraction of the model constant while scaling up the total number of clients and the noise proportionally.
In the remainder of this study, we first review related work in Section 2. We then present our approach in Section 3. Section 4 presents the experimental results, and we finally conclude the study in Section 5.
Related Works
Many researchers have developed various COVID-19 detection models to help hospitals detect patients early. Most researchers have focused on identifying COVID-19 cases based on chest X-ray and CT images. Horry [14] explored transfer learning for COVID-19 detection using three kinds of medical images (X-ray, ultrasound, and CT scan). In a comparative study of several popular CNN models, the VGG19 model performed COVID-19 detection well across all three lung imaging modalities. Afshar presented a capsule framework (COVID-CAPS) to identify COVID-19 cases from chest X-ray images [26]. They demonstrated that COVID-CAPS outperformed the traditional model. Mukherjee proposed a CNN-tailored deep neural network (DNN) algorithm to identify COVID-19 cases using chest CT and X-ray (CXR) images [27]. They demonstrated that their model outperformed other models such as InceptionV3, MobileNet, and ResNet. Other researchers have used COVID-19 patients' symptom data. Otoom [28] proposed a real-time COVID-19 detection, treatment, and monitoring system. They used an Internet of Things (IoT) framework to collect real-time symptom data from users, and then identified suspected coronavirus cases and administered appropriate treatment during quarantine. They evaluated the framework's performance using eight algorithms (support vector machine, neural network, Naïve Bayes, K-Nearest Neighbor, decision table, decision stump, OneR, and ZeroR), five of which achieved an accuracy of more than 90%. Akib Mohi [29] presented a COVID-19 classification system using textual clinical reports. They extracted features using several feature extraction techniques, such as bag of words, term frequency/inverse document frequency, and report length, and then used these features as input to traditional and ensemble machine learning classifiers. Their experiments showed that logistic regression and multinomial Naïve Bayes achieved better accuracy than other methods. Khaloufi [30] proposed a preliminary diagnosis of COVID-19 using symptom monitoring from smartphone-embedded sensors. The model achieved an overall accuracy of 79% for detecting COVID-19 cases. Menni [9] proposed a model combining symptoms to predict COVID-19 cases based on symptoms reported via a smartphone-based app. The study found that loss of smell and taste is a potential predictor of COVID-19, apart from other symptoms such as high temperature and a new, persistent cough. Canas [31] presented an early detection model for COVID-19 cases using prospective, observational, longitudinal, self-reported data from patients in the UK covering 19 symptoms over the three days after symptom onset. The experimental results showed that the hierarchical Gaussian model achieved higher performance than the logistic regression model.
Some researchers have used FL in COVID-19 detection systems to protect patients' data, because such data are sensitive and directly impact patient security. Yan [21] proposed an FL approach for COVID-19 detection based on chest X-ray images. The study compared the performance of four models (MobileNetv2, ResNet18, ResNeXt, and COVID-Net) and found that ResNet18 achieved the highest accuracy both with and without FL. Zhang [15] presented a novel dynamic fusion-based FL approach to detect COVID-19 infections using CT and chest X-ray images. The study conducted experiments using three models, GhostNet, ResNet50, and ResNet101, and found that the proposed approach achieved better performance than the default setting for ResNet50 and ResNet101. Abdul [32] presented an FL model to identify COVID-19 cases using chest X-ray images and a descriptive dataset. The study found that using the softmax activation function and the stochastic gradient descent (SGD) optimizer achieved better performance. A brief summary of existing COVID-19 detection approaches is presented in Table 1. Unlike existing methods, we presented an FL framework with higher privacy for COVID-19 detection by adding differential privacy stochastic gradient descent (DP-SGD). We also provided strategies for improving model accuracy on Non-IID data and for maintaining the balance between the security and accuracy of the COVID-19 detection model. Evaluating the proposed models on two different challenging datasets (chest X-ray images and symptom information), we found that a convolutional neural network (CNN) with an added spatial pyramid pooling (SPP) layer achieved the highest accuracy on the chest X-ray dataset, and that artificial neural networks (ANNs) outperformed other models, such as long short-term memory (LSTM) and 1D CNN (1DCNN), for COVID-19 detection on the symptom dataset. To the best of our knowledge, this is the first study to apply DP-SGD in FL to detect COVID-19 cases based on chest X-ray images and symptoms.
Approach
In this section, we present a comprehensive overview of our privacy-preserving FL system for COVID-19 detection. This section is organized as follows: first, we introduce an overview of FL in Section 3.1. We then provide a comprehensive introduction to DP-SGD in Section 3.2. We explain the details of our federated COVID-19 system model in Section 3.3. Finally, Sections 3.4 and 3.5 present the network models designed to recognize COVID-19 cases based on chest X-ray images and symptom information, respectively.
Federated Learning
The FL approach was introduced in 2016. It is a machine learning strategy in which multiple clients collaboratively solve a machine learning problem, with each client storing its own data locally rather than sharing or transferring it to other clients [18]. FL uses less storage and fewer computational resources on the central server than centralized learning, and it helps protect each client's private data. FL was initially implemented over many small devices [18,33]. The range of implemented FL applications has increased significantly, including some that involve only a few clients collaborating among institutions [34][35][36]. These two FL settings are called "cross-device" and "cross-silo", respectively. A typical FL training run follows several basic steps. In the first step, all chosen clients download the current weights of the master model. Second, the clients compute weight updates independently based on their local data.
Finally, all clients upload their weights to the server, where they are gathered and aggregated to produce a new master model. These steps are repeated until a certain convergence criterion is satisfied. In our setting, we adopted Federated Averaging (FedAvg) [18] as our FL algorithm. In this manner, the selected clients compute the gradient of the loss on the current model using their local data in each communication round. Then, the server calculates a weighted average of the resulting models. The pseudo-code of FedAvg adapted from [18] is given in Algorithm 1.
Algorithm 1 FederatedAveraging. The K clients are indexed by k; B is the local minibatch size, E is the number of local epochs, and η is the learning rate.
Differential Privacy Stochastic Gradient Descent (DP-SGD)
Differential privacy is a strong standard for quantifying and limiting personal information disclosure [37][38][39]. It masks the contribution of any individual user by introducing a level of uncertainty into the released model. Differential privacy is quantified by the privacy loss parameters (ε, δ), where ε bounds how much the output can reveal about any individual in the dataset, and δ represents the probability that an unwanted event leaks more data than the ε bound allows. The smaller the (ε, δ), the higher the privacy. Differential privacy is defined as follows: a randomized algorithm A: D → R with domain D and range R is (ε, δ)-differentially private if, for any subset of outputs S ⊆ R and for any two adjacent datasets D and D′, Pr[A(D) ∈ S] ≤ e^ε · Pr[A(D′) ∈ S] + δ. In the context of federated learning, we say that two decentralized datasets D and D′ are adjacent if they differ in a single entry, that is, if D can be obtained from D′ by adding or subtracting all the records of a single client. δ is preferably smaller than 1/|D|. Guaranteeing differential privacy may impact the accuracy or utility of our model. In the context of rich data, the model appears able to offer both low privacy risk and high utility. However, for large datasets, the optimization methods must be scalable. Therefore, for our differential privacy setting, we used SGD to control the influence of the training data during the training process, as described in previous works [40][41][42]. The DP-SGD strategy adds random Gaussian noise to the aggregated global model that is sufficient to hide any single client's update. It consists of the following steps: at each step of DP-SGD, we compute the gradient for a random subset of examples, then clip these per-sample gradients to a fixed maximum ℓ2 norm. Next, random noise is added to the clipped gradients when computing the average step. Finally, we multiply these clipped and noised gradients by the learning rate and apply the product to update the model parameters. In Algorithm 1, clients perform perturbation on their gradients using the DP-SGD strategy after computing the training gradients based on their local data.
System Model
We developed our federated COVID-19 detection system based on a client-server architecture implementing FedAvg via local stochastic gradient descent (SGD) and addressing privacy risks with a DP-SGD guarantee. Our federated COVID-19 system includes three stages: clients synchronize with the server, clients compute local models based on individual data, and the server aggregates the global model. The overall system architecture is shown in Figure 1.
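To make the client update and the server aggregation concrete, the following is a minimal sketch, assuming PyTorch; the function names (local_dp_sgd_step, fedavg), the fixed clip norm, and the noise multiplier are illustrative assumptions, not the authors' implementation.

import torch

def local_dp_sgd_step(model, batch, loss_fn, lr=0.02, clip_norm=1.0, noise_mult=1.1):
    """One perturbed SGD step on a client: clip each per-sample gradient to a
    maximum L2 norm, add Gaussian noise, then apply the averaged update."""
    xs, ys = batch
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                          # per-sample gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                            # clipped gradient sum
    n = len(xs)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.normal(0.0, noise_mult * clip_norm, size=p.shape)
            p -= lr * (s + noise) / n                 # noised average step

def fedavg(client_states, client_sizes):
    """Server side: FedAvg weighted average of (floating-point) client
    parameter dictionaries, weighted by local dataset size."""
    total = sum(client_sizes)
    avg = {k: torch.zeros_like(v) for k, v in client_states[0].items()}
    for state, n_k in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += (n_k / total) * v
    return avg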
Clients synchronize with the server. At round t, a random fraction C of the clients is selected to connect to the server to compute the gradient of the loss over all the data held by these clients. The selected clients download the current average model parameters θ_avg^(t−1) from the previous iteration. For the first iteration, the clients use the same random initial model parameters θ^0.
Clients compute the local models based on individual data. Each client locally computes the training gradients and updates independently based on its local data, divided into B minibatches, for E epochs. The client then performs perturbation on its gradients using the DP-SGD technique described in Section 3.2. Finally, the client reports the learned model parameters (denoted θ_k^t, where k is the client index) to the server for averaging.
Server aggregates the global model. Once the server receives the parameter updates from the clients, the aggregation step averages the updates to produce the new model parameters based on the FedAvg approach. The overall complexity of the proposed scheme can be expressed as O(t × K × E × B) at most, where t, K, E, and B represent the total number of communication rounds, the total number of clients, the number of local epochs, and the local minibatch size of clients, respectively.
Network Models Designed for the Recognition of COVID-19 Using Chest X-ray Images
To evaluate the performance of our FL system using chest X-ray images, we used four deep learning models: a 2D CNN with 5 × 5 convolutional layers (5 × 5 CNN), residual neural networks (ResNets), a 2D CNN with 3 × 3 convolutional layers (3 × 3 CNN), and a 2D CNN with 3 × 3 convolutional layers and SPP (3 × 3 CNN-SPP). The complexity of each model is O(ω), where ω denotes the model parameters. The detailed descriptions are presented in Table 2.
Table 2. The complexity of the proposed models using chest X-ray images (Model / No. Layers / No. Parameters ω (Million)).
The 5 × 5 CNN model was previously used to construct a decentralized classification model for MNIST digit recognition and showed promising results [18]. The 5 × 5 CNN architecture includes two 5 × 5 convolution layers, the first with 32 channels and the second with 64 channels, each followed by 2 × 2 max pooling, and a fully connected layer with 512 units and ReLU activations. Similar to [18], we used a 5 × 5 CNN with the SGD optimizer for COVID-19 detection using chest X-ray images.
ResNets
ResNet is a specific neural network architecture proposed in [43]. ResNets are made up of residual blocks, which help improve the accuracy of models using skip connections. The skip connections in residual blocks address the vanishing gradient problem in deep neural networks (DNNs). As a result, the model learns in such a way that the higher layers perform at least as well as the lower layers. ResNet has multiple variants, namely ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152, which contain 18, 34, 50, 101, and 152 layers, respectively. ResNet has shown compelling efficiency for COVID-19 detection using chest X-ray data [44][45][46][47]. Similar to these previous works, we applied ResNet18 and ResNet50 among the deep learning networks for our COVID-19 detection based on chest X-ray images.
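As a point of reference, here is a minimal sketch of the 5 × 5 CNN baseline described above, assuming PyTorch, single-channel 64 × 64 inputs, and three output classes; the input size and class count are assumptions for illustration.

import torch.nn as nn

class CNN5x5(nn.Module):
    """Two 5 x 5 conv layers (32 and 64 channels), each followed by 2 x 2
    max pooling, then a 512-unit fully connected layer with ReLU."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))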
3 × 3 CNN
The 3 × 3 CNN was one of our evaluation models for COVID-19 detection based on chest X-ray images. It includes two 3 × 3 convolutional layers, the first with 32 channels and the second with 64 channels, followed by a 2 × 2 max-pooling layer, a fully connected layer with 128 units and ReLU activation, and a final softmax output layer. Two dropout layers were added before and after the fully connected layer, with dropout probabilities of 0.25 and 0.5, respectively, to reduce overfitting. Figure 2 shows our proposed 3 × 3 CNN architecture for COVID-19 detection.
3 × 3 CNN-SPP
The SPP layer was first introduced in [48]; it makes a CNN model agnostic to the input image size. SPP is inspired by the Bag of Words (BoW) approach [49], which pools features together. SPP outperforms conventional pooling by capturing more spatial information and accepting arbitrary input sizes. To adopt SPP in a deep network, we replace the last pooling layer (i.e., the pooling layer after the last convolutional layer) with an SPP layer.
Network Models Designed for the Recognition of COVID-19 Using Symptom Data
To evaluate the performance of our FL system using symptom data, we used three machine learning models: 1DCNN, ANN, and LSTM. The complexity of each model is O(ω), where ω denotes the model parameters. The detailed descriptions are presented in Table 3.
1DCNN
CNNs not only achieve excellent performance on computer vision tasks such as object detection [55,56], image classification [57], image generation [58], tracking [59], and face recognition [60], but can also be applied to sequence data [61]. Some works have achieved strong results in COVID-19 detection using CNN architectures on textual data. For example, Lella et al. [62] used a 1DCNN with augmentation to diagnose COVID-19 from human respiratory sounds such as cough, breath, and voice. In this study, we applied the 1DCNN to identify COVID-19 cases based on symptom data. Our model contained three 1DCNN layers, each with a convolutional kernel size of three and a one-step stride. The output size for all layers is 64. After the third layer, a dropout function with a probability of 0.5 is applied to prevent the model from overfitting. Then, a 1D max-pooling layer is applied to reduce the output dimension. Finally, a softmax layer is applied to calculate the probability of each output.
ANN
The ANN is a computational model inspired by the human brain that allows a machine to learn and make decisions [63]. There are different layers in an ANN structure; each layer is arranged as a vector of single units called neurons. Each layer performs mathematical processing on its inputs to produce outputs, which serve as the inputs for the next layer. Figure 4 represents the structure of an ANN. Because of these abilities, ANNs have been applied to different machine learning tasks, such as time-series prediction [64] and computer vision tasks [65,66], and have produced reliable results. In the COVID-19 prediction task, several works using the ANN structure have yielded excellent results [67][68][69]. Furthermore, Hayat Khaloufi et al. [30] conducted a notable study in which a customized ANN was proposed to predict COVID-19 from a collected dataset, helping to predict whether a patient is infected based on their symptoms. The proposed model outperformed other traditional machine learning methods. In this study, we employed an ANN structure with four hidden layers.
The hidden sizes of the four layers are 64, 128, 128, and 2, respectively. A dropout function with a probability of 0.5 is applied after the third layer to reduce overfitting and speed up the training process. Finally, a softmax function is applied after the last layer to produce the probabilities of our outputs.
LSTM
An RNN is considered another type of ANN that can collect data across sequential steps and process one element of sequential data at a time. Unlike traditional neural networks, the output of an RNN depends on the prior elements within the sequence, which makes RNNs suitable for sequential or time-series data. The LSTM is a variant of the RNN, first proposed by Hochreiter et al. [70] to tackle the vanishing and exploding gradient problems that commonly arise in conventional RNNs. In general, a typical LSTM cell comprises four gates: the forget gate, input gate, cell gate, and output gate. The structure of an LSTM cell is presented in Figure 5. In each operation, the LSTM cell processes a given input sequence x = [x_1, x_2, ..., x_T] to produce an output hidden sequence h = [h_1, h_2, ..., h_T] using the following equations iteratively from t = 1 to T:
f_t = σ(W_f x_t + U_f h_{t−1} + b_f)
i_t = σ(W_i x_t + U_i h_{t−1} + b_i)
c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t
o_t = σ(W_o x_t + U_o h_{t−1} + b_o)
h_t = o_t ⊙ tanh(c_t)
where f_t, i_t, c̃_t, and o_t are the forget gate, the input gate, the candidate cell gate, and the output gate at time t, respectively; the W and U terms are weight matrices and the b terms are the bias vectors; ⊙ denotes the Hadamard product; and σ and tanh denote the sigmoid and hyperbolic tangent activation functions, respectively. Given its great success on sequence data, the LSTM has been applied to COVID-19 detection tasks and achieved strong performance [71][72][73][74][75]. For instance, ArunKumar et al. [76] proposed a deep learning approach that modified the traditional LSTM with a new activation function for predicting infected cases and deaths in a COVID-19 dataset. In this paper, we employed a simple stack of three LSTM layers with a hidden size of 64 for prediction on our COVID-19 symptom dataset. The LSTM has a dropout layer with a probability of 0.25. After the LSTM, two linear layers are added to reduce the output dimension. Finally, a softmax layer is applied to calculate the probability of each output.
Data Collection and Processing
Two types of COVID-19 datasets, chest X-ray images and symptom data, were used to train and evaluate our FL system.
Chest X-ray Dataset
Following the work of a group of researchers from Qatar University and the University of Dhaka, Bangladesh, together with collaborators from Pakistan and Malaysia and medical doctors [77,78], we collected a dataset containing 3616 COVID-19-positive, 10,192 normal, and 1345 viral pneumonia chest X-ray images. The COVID-19 data were collected from various publicly accessible datasets, online sources, and published papers [79][80][81][82][83][84]; normal data were collected from two different datasets [85,86]; and viral pneumonia data were collected from the chest X-ray images (pneumonia) database [86]. A few samples of chest X-ray images are shown in Figure 6. For each class, we randomly kept 200 images as testing data and used the rest for training. The statistics of the chest X-ray dataset are shown in Table 4. The symptom dataset for COVID-19 cases is based on a list of symptoms published by the WHO in May 2020 from India. It is provided by [87].
The symptom dataset contains 5434 samples with 21 columns: breathing problem, fever, dry cough, sore throat, running nose, asthma, chronic lung disease, headache, heart disease, diabetes, hypertension, fatigue, gastrointestinal, abroad travel, contact with COVID-19 patient, attended a large gathering, visited public exposed places, family working in public exposed places, wearing masks, sanitization from markets, and COVID-19. Each column contains a "Yes" or "No" value. We performed some data processing before feeding the data into the network model, using the following steps:
• Removing the columns containing a single unique value, because these columns provide no useful information for our model;
• Converting categorical data to one-hot encoded data as follows: 1 represents "Yes" and 0 represents "No";
• For each COVID-19 class, randomly keeping 10% for testing and using the rest (90%) for training.
Table 5 presents the statistics of the symptom dataset.
Improvement in COVID-19 Detection Based on Chest X-ray Images and 3 × 3 CNN-SPP
To choose an effective model for our federated COVID-19 detection based on chest X-ray images, we conducted experiments using the four models discussed in Section 3.4. We tested each model by running experiments with the number of clients K = 3, client-fraction C = 0.33 (one client per round), local epochs E = 1, client learning rate η = 0.02, and local minibatch size B = 20. As shown in Figure 7, the 3 × 3 CNN model achieved better accuracy than the 5 × 5 CNN, ResNet18, and ResNet50 methods. Furthermore, the model was improved further by adding an SPP layer. Our proposed 3 × 3 CNN-SPP model achieves the highest accuracy of 95.32% after 1000 communication rounds. Adopting the SPP layer allows our deep convolutional neural network to generate representations from arbitrarily sized images. The 3 × 3 CNN-SPP is therefore able to extract features at different scales and capture more spatial information; as a result, the classification performance is improved. Therefore, we used this 3 × 3 CNN-SPP model for our COVID-19 detection system based on chest X-ray images. The 5 × 5 CNN model achieved faster convergence than the other models. In addition, the accuracy of all models increases steadily and, after 200 communication rounds, remains stable.
Improvement in COVID-19 Detection Based on Symptom Data and ANN
We evaluated the three models discussed in Section 3.5 for our federated COVID-19 detection using symptom data. For each model, we conducted the experiment with the number of clients K = 4, client-fraction C = 0.25 (one client per round), local epochs E = 1, client learning rate η = 0.02, and local minibatch size B = 20. As shown in Figure 8, our proposed ANN achieved more favorable performance than the 1DCNN and LSTM, with the highest accuracy of 96.65%. Hosting multiple data points through neurons that perform mathematical processing to produce the outputs, the ANN applies a learnable weight to each neuron and updates it via the cost function after each iteration to fit the training data. For that reason, the ANN has shown great success when dealing with datasets that have a non-linear relationship between the input and output variables; as such, the accuracy is improved. Therefore, we used this ANN model as our COVID-19 detection model on the symptom dataset.
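To make the chosen model concrete, here is a minimal PyTorch sketch of this four-hidden-layer ANN; the 20-feature input (the 21 dataset columns minus the COVID-19 label) and the class name are assumptions for illustration.

import torch.nn as nn

class SymptomANN(nn.Module):
    """Hidden sizes 64, 128, 128, and 2; dropout 0.5 after the third layer;
    softmax over the two output classes."""
    def __init__(self, in_features=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Dropout(0.5),              # applied after the third layer
            nn.Linear(128, 2),
            nn.Softmax(dim=1),            # class probabilities
        )

    def forward(self, x):
        return self.net(x)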
The accuracy of the 1DCNN and ANN models increases steadily after 200 communication rounds and stabilizes, reaching convergence at around 600 communication rounds, while the LSTM model converges very early, within the first rounds.
IID and Non-IID
Unlike centralized models, FL usually faces a Non-IID problem [18], in which the size and distribution of local datasets typically vary heavily between different clients, because each client corresponds to a particular user, geographic location, or time window. For instance, a chest X-ray dataset with imbalanced label sizes is shown in Figure 9, where each client owns data samples from a fixed number k of label classes. As a result, clients' local models, despite starting from the same initial parameters, converge to different models, and aggregating the divergent models on the server can slow down convergence and worsen the learning performance. In this study, we compared the performance of our model on both IID and Non-IID datasets. For the IID dataset, each client is randomly assigned a uniform distribution over all classes. For the Non-IID dataset, we first sort the data by class label. We then divide the data into two cases: (1) Non-IID(1), 1-class Non-IID, where each client receives a data partition from only a single class, and (2) Non-IID(2), 2-class Non-IID, where each client receives a data partition from no more than two classes. We did not consider Non-IID(2) for the symptom dataset because it contains only two classes. We used the same parameters as in Sections 4.2 and 4.3 for the chest X-ray and symptom models. As shown in Figure 10 and Table 6, a significant reduction is observed on Non-IID data compared with IID data for our chest X-ray images. The maximum accuracy reduction occurs for the most extreme 1-class Non-IID(1), approximately 49.71 to 55.32%. For the 2-class Non-IID(2), the accuracy reduction was approximately 14 to 24%. A similar observation was made on the symptom dataset; Figure 11 and Table 7 show a 1.28% to 2.29% reduction in accuracy for Non-IID(1) compared to IID. From these experiments, we found that Non-IID data are one of the major issues of the FL system, because under Non-IID conditions the size and distribution of the local datasets differ for each client. This results in the clients' local models being significantly different from each other, and aggregating the divergent models on the server can slow down convergence and significantly reduce model accuracy. Therefore, we must find a way to improve our model performance on Non-IID data. In the next section, we propose a strategy to improve performance on Non-IID(1) data.
Non-IID Improvement
In this section, we evaluate different parameters in our Non-IID(1) setting to determine the relationship between these parameters and our model performance. We first experiment with the total number of clients K, the client-fraction C, and the local minibatch size B on models using chest X-ray images. We then further validate these parameters on the model using the symptom dataset.
Non-IID with Different Numbers of Clients
We first evaluated the model's performance using chest X-ray images on Non-IID(1) with various total numbers of clients (3, 30, and 300) while keeping the other parameters fixed: client-fraction C = 0.33, local epochs E = 1, client learning rate η = 0.02, and local minibatch size B = 20. Figure 12 and Table 8 show the impact of varying K on our COVID-19 detection model.
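Before turning to those results, here is a small sketch of the label-sorted 1-class partitioning described above; the helper name and the equal shard sizes are illustrative assumptions.

import numpy as np

def one_class_partition(labels, num_clients):
    """Sort sample indices by class label, then hand each client one
    contiguous shard, so each client sees (at most) one class."""
    order = np.argsort(labels)
    shards = np.array_split(order, num_clients)
    return {k: shard for k, shard in enumerate(shards)}

# Toy example: 3 clients over 3 classes -> each client holds a single class.
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
print(one_class_partition(labels, 3))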
The results demonstrated that using a larger number of clients (K = 30 and K = 300) significantly improves performance in our Non-IID(1) setting compared with the model using K = 3. This can be explained as follows: with many clients in our Non-IID(1) setting, some clients may receive data from the same class. Therefore, if our model has previously learned similar data patterns from earlier clients, it will easily recognize the patterns of the current client. This improved the model's accuracy on chest X-ray images.
Increasing Client-Fraction
In this experiment, we evaluated our model using chest X-ray images with the client-fraction C, which controls the amount of multi-client parallelism. To evaluate this, we fixed the local epochs E = 1, batch size B = 20, client learning rate η = 0.02, and total number of clients K = 300 (which achieved the best performance in Section 4.5.1), while varying the client-fraction C ∈ {0.1, 0.3, 0.6, 0.9, 1.0}. As shown in Figure 13, using a larger client-fraction improved our model accuracy. Moreover, a larger client-fraction helped the model converge faster within the same number of communication rounds.
Increasing Computation per Client
The local batch size B is the last parameter we used to evaluate the effect on the Non-IID(1) model. We fixed the client-fraction C to 1.0, which showed improved results in the previous section, the local epochs E to 1, the client learning rate η to 0.02, and the number of clients K to 300, while varying the local batch size B ∈ {1, 20, 100, 200, 500}. Figure 14 shows that, with 1000 communication rounds, the model using chest X-ray images with the small batch size B = 1 achieved the lowest accuracy of approximately 63.33%, a larger batch size B = 20 achieved an improved accuracy of 77.56%, and the best result was achieved with batch size B = 200, at 79.56% accuracy. In addition, the model using a larger batch size converges faster than the model with a smaller batch size, because a larger batch size means a larger amount of data is used per update.
Non-IID and Refined Non-IID
In Sections 4.5.1-4.5.3, we found that each of these parameters affects the chest X-ray model's performance. We then combined these parameter values to see how much our model accuracy could be increased compared to the baseline model. The baseline model is the Non-IID(1) model used in Section 4.4 with the number of clients K = 3, fraction C = 0.33, client learning rate η = 0.02, and local batch size B = 20. Non-IID (300 clients) is a modification of the baseline model with K = 300 clients. The final refined model extends Non-IID (300 clients) with client-fraction C = 1.0 and local batch size B = 200. Table 9 shows that using K = 300 clients improved accuracy by up to 33.97% compared to the baseline model using three clients. Furthermore, we improved the accuracy of Non-IID (300 clients) by up to a further 4.7% by increasing the client-fraction and local batch size.
Table 9. Non-IID versus refined Non-IID based on chest X-ray images.
Experiments on Symptom Data
In Section 4.4, we observed a slight reduction in the symptom model's accuracy on Non-IID(1). We now examine whether tuning the parameters of the chest X-ray model (total number of clients K, client-fraction C, and local minibatch size B) is also effective for the symptom model. As shown in Figure 15, using a larger client-fraction C showed a slight improvement in model performance on Non-IID(1).
However, increasing the number of clients K and the batch size B did not improve the performance of our symptom model on Non-IID(1), as shown in Figure 16. Table 10 shows that our refined model improved slightly, by 0.31 to 0.82% accuracy, compared to Non-IID(1), almost reaching the model's accuracy on IID data.
4.6. Privacy Improvement for the Federated COVID-19 Detection Model Using DP-SGD
FL helps mitigate the privacy risks associated with centralized machine learning without sharing each client's private data. However, an adversary might still infer information from the gradients shared in our FL system. To make it much more challenging for an adversary to breach privacy, we applied DP-SGD, adding random Gaussian noise to the local gradients that are aggregated into the server's model. In this section, we first examine how much different levels of privacy affect our model performance. We then provide a strategy to balance differential privacy against the model's accuracy. Our FL with DP-SGD for a COVID-19 detection system is shown in Figure 1.
Trade-Off between Model Privacy and Accuracy
For the chest X-ray model, we conducted experiments in the IID setting with the total number of clients K = 3 and varying noise values to see how much differential privacy impacts our model's utility. A lower ε value means the model has higher security. We set δ = 10^(−5), client-fraction C = 0.33, client learning rate η = 0.02, and batch size B = 20. As shown in Figure 17, we found a trade-off between privacy and model accuracy: the more noise we add to our model, the lower the model's accuracy. Similar observations were made in experiments on the symptom model. Figure 18 presents the trade-off between model privacy and performance when we conducted experiments with varying noise values. When the noise value was set to 1.0, the model achieved the highest accuracy, but it was least secure, with the largest ε value of 1600. In contrast, the model achieved the most security but also the worst accuracy when the noise value was set to 5.0. We kept the total number of clients K at 4, the client-fraction C at 0.25, the client learning rate η at 0.02, δ = 10^(−5), and the batch size at 20 in these experiments. Therefore, we sought a way to reduce the model's privacy risk while keeping similar model utility. We first experimented with three system parameters on our chest X-ray model: the fraction of the model q (number of clients per round/total number of clients), the total number of clients, and the noise value. In our experiments, we scaled up the total number of clients while keeping the fraction of the model constant, and the noise value was scaled up over the values {0.1, 0.3, 0.7, 1.3, 1.9, 2.1}. As shown in Figure 19 and Table 11, by increasing the total number of clients while keeping the fraction of the model constant, we could add a larger noise value, and the model achieved similar utility with higher privacy. For example, after increasing the number of clients from 180 to 270, we could increase the noise value from 1.3 to 1.9, roughly halving the ε value from 97.39 to 46.36, while the model's accuracy was reduced by only 0.94%, from 70.21% to 69.27%.
Figure 19. The robustness of differential privacy on chest X-ray images.
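As a back-of-the-envelope check of this scaling strategy, suppose each of the m = q · K participating clients adds independent Gaussian noise of standard deviation σ and the server averages the updates; the averaged update then sees noise of roughly σ/√m. Under that simplifying assumption (the value q = 0.6 below is also an assumption), raising K lets σ grow with little change to the effective noise:

import math

def effective_noise(sigma, q, K):
    m = q * K                      # clients participating per round
    return sigma / math.sqrt(m)    # noise std seen by the averaged update

print(effective_noise(sigma=1.3, q=0.6, K=180))  # ~0.125
print(effective_noise(sigma=1.9, q=0.6, K=270))  # ~0.149, similar utility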
The same conclusion was reached on the symptom model: we achieved robustness of differential privacy and accuracy by increasing the total number of clients while keeping the fraction of the model constant and scaling up the noise proportionally. As shown in Table 12, after increasing the number of clients from four to sixty while keeping the fraction of the model q constant, we could reduce the ε value by more than ten times, from 1600 to 120, while the model's accuracy was reduced by only 0.17%, from 93.56% to 93.39%.
Conclusions
This study presented a higher privacy-preserving FL system for COVID-19 detection based on two types of datasets: chest X-ray images and symptom data. Through experiments on seven deep learning models (5 × 5 CNN, ResNets, 3 × 3 CNN, 3 × 3 CNN-SPP, ANN, 1DCNN, and LSTM), our federated COVID-19 detection models using 3 × 3 CNN-SPP and ANN achieved the best accuracies of 95.32% on chest X-ray images and 96.65% on symptom data, respectively. We first showed that the accuracy of FL for COVID-19 identification decreased significantly on Non-IID data. As a solution, we proposed a strategy to improve accuracy on Non-IID data by increasing the total number of clients, the parallelism (client-fraction), and the computation per client (batch size). Experiments showed that Non-IID model accuracy could be increased by 18.41% for chest X-ray images and 0.82% for symptom data. Second, to enhance patient data privacy in our FL model, we applied DP-SGD, which is resilient to adaptive attacks using auxiliary information. Finally, we proposed a strategy to maintain the robustness of FL, ensuring the security and accuracy of the model, by keeping the fraction of the model constant and proportionally scaling up the total number of clients and the noise. In our future work, we would like to implement this method using larger datasets available from various hospitals worldwide. Furthermore, we hope that our proposed privacy-preserving FL framework enhances data protection for collaborative research in the fight against the COVID-19 pandemic.
Institutional Review Board Statement: Ethical review and approval were waived for this study because all data were derived from public databases.
Informed Consent Statement: Patient consent was waived because all data were derived from public databases.
Conflicts of Interest: The authors declare no conflict of interest.
Best-First Beam Search
Decoding for many NLP tasks requires an effective heuristic algorithm for approximating exact search, since the problem of searching the full output space is often intractable, or impractical in many settings. The default algorithm for this job is beam search, a pruned version of breadth-first search. Quite surprisingly, beam search often returns better results than exact inference due to a beneficial search bias for NLP tasks. In this work, we show that the standard implementation of beam search can be made up to 10x faster in practice. Our method assumes that the scoring function is monotonic in the sequence length, which allows us to safely prune hypotheses that cannot be in the final set of hypotheses early on. We devise effective monotonic approximations to popular non-monotonic scoring functions, including length normalization and mutual information decoding. Lastly, we propose a memory-reduced variant of best-first beam search, which has a similar beneficial search bias in terms of downstream performance, but runs in a fraction of the time.
Introduction
Beam search is a common heuristic algorithm for decoding structured predictors, e.g., neural machine translation models and transition-based parsers. Due to the widespread adoption of recurrent neural networks and other non-Markov models, traditional dynamic programming solutions, such as the Viterbi algorithm (Viterbi, 1967), are prohibitively inefficient; this makes beam search a common component of many state-of-the-art NLP systems. Despite offering no formal guarantee of finding the highest-scoring hypothesis under the model, beam search yields impressive performance on a variety of tasks, unexpectedly providing a beneficial search bias over exact search for many tasks (Stahlberg and Byrne, 2019). Within NLP, most research on beam search has focused on altering the log-probability scoring function to return improved results, e.g., higher BLEU scores (Murray and Chiang, 2018; Shu and Nakayama, 2018; Yang et al., 2018) or a more diverse set of outputs (Vijayakumar et al., 2016). However, little work has been done to speed up beam search itself. Filling this gap, this paper focuses on reformulating beam search in order to make it faster. We propose best-first beam search, a prioritized version of traditional beam search which is up to an order of magnitude faster in practice while still returning the same set of results. We additionally discuss an even faster heuristic version of our algorithm which further limits the number of candidate solutions, leading to a smaller memory footprint while still finding good solutions. Concretely, we offer a novel interpretation of beam search as an agenda-based algorithm where traditional beam search is recovered by employing a length-based prioritization scheme. We prove that a specific best-first prioritization scheme, as in classic A* search (Hart et al., 1968), allows for the elimination of paths that will necessarily fall off the beam; for many scoring functions, including standard log-probability scoring, we can still guarantee that the same k hypotheses as traditional beam search are returned. Indeed, our algorithm returns beam search's top hypothesis the first time it encounters a complete hypothesis, allowing the program to stop early.
Further, we discuss the application of best-first beam search to several popular scoring functions in the literature (Li et al., 2016); this demonstrates that we have a general framework for adapting a variety of rescoring methods and alternate objectives to work with our algorithm. Empirically, we compare best-first beam search to ordinary beam search on two NLP sequence-to-sequence tasks: neural machine translation (NMT) and abstractive summarization (AS). On NMT, we find that our algorithm achieves roughly a 30% speed-up over traditional beam search, with increased gains for larger beams (e.g., ≈ 10x for a beam of 500). We find similar results hold for AS. Finally, we show that our memory-reduced version, which limits the number of active hypotheses, leads to additional speed-ups over best-first beam search across beam sizes while maintaining similar BLEU scores. Our code is available online at https://github.com/rycolab/bfbs
2 Sequence Transduction
A core operation in structured prediction models is the determination of the highest-scoring output for a given input under a learned scoring model:

y* = argmax_{y ∈ Y(x)} score(x, y)    (1)

where x is an input and Y(x) is a set of well-formed outputs for the input. An important example of (1) is maximum a posteriori (MAP) decoding:

y* = argmax_{y ∈ Y(x)} p(y | x)    (2)

Our work focuses on sequence-to-sequence transduction: predicting an output sequence given an input sequence. One such task is machine translation, wherein a source-language sentence is mapped ("transduced") to a target-language sentence. While our exposition focuses on sequence-to-sequence prediction, our algorithms are directly applicable to any sequential structured prediction model, such as transition-based parsers (Nivre et al., 2008) and sequence taggers (McCallum et al., 2000; Lafferty et al., 2001).
Notation. Let x = ⟨x_1, ..., x_{N_x}⟩ be an input sequence of length N_x and, likewise, let y = ⟨y_1, ..., y_{N_y}⟩ be an output sequence of length N_y. Each y_t is an element of V, the set of output tokens. Finally, let Y(x) be the set of all valid output sequences (i.e., complete hypotheses). For the task of language generation, which we focus on experimentally, this set is defined as

Y(x) = { BOS ∘ v ∘ EOS | v ∈ V^{<n_max(x)} }    (3)

where ∘ is string concatenation and V^{<n_max(x)} is the set of all subsets of V* of size < n_max(x). In words, every valid sequence begins and ends with distinguished tokens (BOS and EOS, respectively).¹ Furthermore, each sequence has at most length n_max(x), which is typically dependent on x, a restriction we impose to ensure termination. Some applications may require a stronger coupling between Y(x) and x (e.g., |x| = |y|). We drop the dependence of Y and n_max on x when it is clear from context.
Scoring. We consider a general additively decomposable scoring model of the form

score(x, y) = Σ_{t=1}^{N_y} score(x, y_{<t} ∘ y_t)    (4)

This framework covers a variety of modeling methodologies, including probabilistic transducers (both globally and locally normalized) and non-probabilistic models such as maximum-margin techniques (Taskar et al., 2004). Most importantly, (4) covers MAP decoding (2) of neural sequence-to-sequence models à la Sutskever et al. (2014):²

score_s2s(x, y_{<t} ∘ y_t) = log p(y_t | y_{<t}, x)    (5)

We note that (5) is the scoring function used for decoding many language generation models.
Beam search. The worst-case running time of exactly computing (1) is exponential in n_max; namely, O(|V|^{n_max}). Beam search is a commonly used approximation to (1) in NMT and language generation tasks. It is used in many (if not most) state-of-the-art NLP systems (Serban et al., 2017; Edunov et al., 2018; Yang et al., 2019).
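A tiny sketch of the additive scoring in (4) and (5): a sequence's score is the running sum of token log-probabilities, so extending a hypothesis can only lower its score. The toy probabilities below are invented for illustration.

import math

def score_s2s(log_probs):
    """log_probs[t] = log p(y_t | y_<t, x); the sequence score is their sum."""
    return sum(log_probs)

prefix = [math.log(0.5), math.log(0.4)]           # score(x, y_<t) ~ -1.609
extended = prefix + [math.log(0.9)]               # append one more token
assert score_s2s(extended) <= score_s2s(prefix)   # monotonicity of (5)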
Beam search may be understood as a pruned version of the classic path-search algorithm, breadth-first search (BFS), where the breadth is narrowed to the beam size k. Pseudocode is given in Alg. 1. Although beam search does not solve (1) exactly, it is a surprisingly useful approximation for NLP models. In many settings, beam search outperforms exact methods in terms of downstream evaluation (Koehn and Knowles, 2017; Stahlberg and Byrne, 2019). For the remainder of this paper, we will pivot our attention away from exact solutions to (1) to exact solutions to the beam search output.
Definition 2.1. k-optimal hypothesis. We say that a hypothesis is k-optimal if it is the top hypothesis returned by beam search with beam size k.
¹ BOS and EOS are typically members of V. Often, EOS counts towards the n_max length limit while BOS does not. This is reflected in (3).
² To see why, apply exp (an order-preserving transformation): exp(score_s2s(x, y)) = exp(Σ_{t=1}^{N_y} log p(y_t | y_{<t}, x)) = Π_{t=1}^{N_y} p(y_t | y_{<t}, x) = p(y | x).
3 A* Beam Search
We develop a meta-algorithm that is parameterized by several choice points. Our general search algorithm for decoding (Alg. 2) takes an arbitrary prioritization function, stopping criterion, and search heuristic. With certain values of these attributes, we recover many common search algorithms: greedy search, beam search, best-first search (Dijkstra, 1959), and A* search (Hart et al., 1968). We propose an alternate prioritization function for beam search that allows for faster decoding while still returning the same k-optimal set of hypotheses.⁴
⁴ Often, the score function is additively decomposable in t, such as (5). Implementations can exploit this fact to make each score evaluation (line 9) O(1) rather than O(t). We did not make this implementation detail explicit in Alg. 1 or Alg. 2 for generality and simplicity.
⁵ If the last token of y′ is the end symbol (e.g., EOS), then y′ is not expanded any further. One can either regard y′ as any other hypothesis, albeit with y′ ∘ y_t = y′, or keep appending EOS (i.e., y′ ∘ y_t = y′ ∘ EOS) so that time step and length can be regarded as synonymous. We adopt the latter standard for comparability with subsequent algorithms.
Here we review the components of our meta-algorithm (the highlighted sections in Alg. 2) that can be varied to recover different search strategies:
(1) A comparator ≺ : Y × Y → {True, False}. A priority queue Q maintains the set of active hypotheses. Elements in this set are ordered according to the generic comparator ≺. When its peek() (or pop()) methods are called, the first element ordered by ≺ is returned (or returned and removed).
(2) A stopping criterion and (3) a search heuristic h(x, y). Additionally, the beam size k limits how many hypotheses of each length are evaluated. If we take k = ∞, we recover unpruned search algorithms.
Recovering Beam Search. To recover beam search from Alg. 2, we use the choice points from Tab. 1. Explicitly, the comparator prioritizes hypotheses from earlier time steps first, but breaks ties with the hypotheses' scores under the model. We note that while the standard algorithm for beam search does not prioritize by score within a time step, variations of the algorithm use this strategy so they can employ early-stopping strategies (Klein et al., 2017). Beam search terminates once either all hypotheses end in EOS or the queue is empty (i.e., when the k beams have been extended n_max time steps but none end in EOS). In the second case, no complete hypothesis is found. Finally, choosing the heuristic h(x, y) = 0 makes the algorithm a case of standard best-first search. Note that, while standard beam search returns a set, Alg. 2 only returns the k-optimal hypothesis.
This behavior is sufficient for the majority of use cases for beam search. However, if the full set of k hypotheses is desired, the stopping criterion can be changed to evaluate true only when k hypotheses are complete. Under the other beam search settings, this would provably return the same set as beam search (see § 4.1).
Recovering A*. To recover the traditional A* search algorithm, we use the comparator that prioritizes hypotheses with a higher score first; ties are broken by hypothesis length. The algorithm terminates when the first item of Q contains an EOS. If we take k = ∞, best-first beam search recovers A*. Any admissible heuristic may be used for h(x, y).
Definition 3.1. Admissible heuristic. A heuristic h is admissible if it never overestimates the future cost (or underestimates the future reward) of continuing down a path.
4 Best-First Beam Search
In its original form, A* search may traverse the entire O(|V|^{n_max}) graph, which, as discussed earlier, is intractable for many decoding problems. While standard beam search addresses this problem by limiting the search space, it still has computational inefficiencies: namely, we must analyze k hypotheses of a given length (i.e., time step), regardless of how poor their scores may already be, before considering longer hypotheses. However, prioritization by length is not strictly necessary for finding a k-optimal hypothesis. As is done in A*, we can use score as the prioritization scheme and still guarantee optimality (or k-optimality) of the paths returned by the algorithm. We define A* beam search as the A* algorithm where breadth is limited to size k. Further, we define best-first beam search as the case of A* beam search when no heuristic is used (see Tab. 1 for algorithm settings). This formulation has two large advantages over standard beam search: (1) we gain the ability to remove paths from the queue that are guaranteed to fall off the beam, and (2) we can terminate the algorithm the first time a complete hypothesis is encountered. We can therefore reduce the computation required for decoding while still returning the same set of results. The mathematical property that makes this short-circuiting of computation possible is the monotonicity of the scoring function. Note that not all scoring functions are monotonic, but many important ones are, including log-probability (5). We discuss effective approximations for popular non-monotonic scoring functions in § 5.
Definition 3.2. Monotonicity. A scoring function score(·, ·) is monotonic in t if, for all x, y_{<t} = ⟨y_1, ..., y_{t−1}⟩, y_t ∈ V, 1 ≤ t ≤ n_max:

score(x, y_{<t}) ≥ score(x, y_{<t} ∘ y_t)

Clearly, (5) is a monotonic scoring function in t because score_s2s ≤ 0; that is, the score of a partial hypothesis y_{<t} can only decrease if we extend it by another symbol y_t. This implies we can order our search according to score(x, y_{<t}) without fear of overlooking a hypothesis whose score would increase over time. Furthermore, once k hypotheses of a given length t have been evaluated, we no longer need to consider any hypothesis where |y| < t, since such hypotheses would necessarily fall off the beam. We can therefore remove such hypotheses from the queue and avoid wasting computational power on their evaluation. We prove this formally in § 4.1. Another implication of the monotonicity property of score is that we may terminate best-first beam search once a hypothesis containing EOS is encountered (i.e., the end state is found).
If the full set of k complete hypotheses is desired, then we simply continue until k hypotheses have reached EOS. We prove the k-optimality of these hypotheses under best-first beam search in § 4.1.
Implementation Details
Standard beam search forms a separate set of active hypotheses for each time step, i.e., each B_t is its own set. Once B_t has been narrowed down to the top k, the previous B_{<t} can be forgotten. However, in best-first beam search, since hypotheses are not evaluated in order of time step, we may need to keep B_t from several time steps at any given point. A naive implementation of best-first beam search is to keep a single priority queue with all the active hypotheses ordered by current score. However, each push to the queue would then require O(log(n_max · k · |V|)) time. We can reduce this runtime by instead keeping a priority queue of beams, where the priority queue is ordered by the highest-scoring hypothesis from each beam. Further, each beam can be represented by a min-max queue (Atkinson et al., 1986); this allows us to limit the size of B_t to k: we can check in O(1) time whether a hypothesis is in the top-k before adding it to B_t. A potential inefficiency, which we avoid, comes from updating B_{t+1}, which we must do when evaluating a hypothesis from B_t. Since all beams are stored in a queue, there is no guarantee of the location of B_{t+1} in the queue. To avoid an O(n_max) lookup, we can keep a pointer to each beam, indexed by t, making the lookup O(1). However, we incur an O(log n_max) term to update the queue of beams, as B_{t+1} may change priority.
Memory-Reduced Best-First Beam Search. A major drawback of the A* algorithm is its memory usage, which in the worst case is O(b^d) for breadth width b and maximum depth d. In the A* formulation of beam search, where the breadth width is limited to the beam size, this amounts to a worst-case O(k · n_max) memory usage, whereas standard beam search has O(k) memory usage. While in many settings the multiplicative factor may be insignificant, for neural sequence models it can be prohibitive; this is due to the large amount of memory required to store each hypothesis (e.g., prior hidden states needed to compute subsequent scores for scoring functions parameterized by neural networks). We propose a variant of best-first beam search that limits memory usage, i.e., the queue capacity. Specifically, if we reach the chosen queue capacity, we remove the worst-scoring active hypothesis from the earliest active time step. This can easily be done in O(1) time given our pointer to each beam.
4.1 Correctness
We show the equivalence of the top hypothesis returned by beam search and best-first beam search when score(·, ·) is monotonically decreasing in t, length-based prioritization is used, and the beam size k is the same for both algorithms. Without loss of generality, we hold x constant in all the following proofs. Note that we take the terms pop and push from queue terminology. Specifically, "popping a hypothesis" refers to making it past line 7 of Alg. 2, where a hypothesis y is expanded by y_t ∈ V. In path-search terminology, this would be equivalent to visiting a node and adding the edges from that node as potential paths to explore. Lastly, we refer to the priority queues used by beam search and best-first beam search as Q_BS and Q_A*, respectively.
Lemma 4.1. Best-first beam search evaluates all hypotheses of a given length t in order of their score.
Proof. We prove the lemma by induction.
The lemma holds trivially for the base case of hypotheses of length 0, because the only hypothesis of length 0 is ⟨BOS⟩. Now, by the inductive hypothesis, suppose Lemma 4.1 holds for all hypotheses of length < t. We will show it must also hold for hypotheses of length t. Consider two competing hypotheses: y = y_{<t} ∘ y_t and y′ = y′_{<t} ∘ y′_t. Note that |y_{<t}| = |y′_{<t}| = t − 1. Suppose score(x, y′) < score(x, y). Case 1: score(x, y′_{<t}) < score(x, y_{<t}). Then, by induction, y_{<t} is popped first, and y is pushed to Q before y′. Since score(x, y′) < score(x, y), y will be popped before y′. Case 2: score(x, y_{<t}) < score(x, y′_{<t}). Then, by induction, y′_{<t} is popped first and y′ is added to Q before y. But, since score(x, y′) < score(x, y) ≤ score(x, y_{<t}) by monotonicity, y_{<t} will be popped before y′. Consequently, y will be pushed to Q before y′ is evaluated. By the rules of the priority queue, y will be evaluated before y′. Case 3: score(x, y′) = score(x, y). The lemma holds whether y or y′ is popped first. By the principle of induction, Lemma 4.1 holds for all t ∈ N_{>0}. Lemma 4.2. The first hypothesis that best-first beam search pops that ends in EOS is k-optimal. Proof. Let y be the first hypothesis popped by best-first beam search ending in EOS. By the rules of the priority queue, no other active hypothesis has a higher score than y. Additionally, by monotonicity of the scoring function, no other hypothesis can subsequently have a score greater than y. Therefore y must be k-optimal. Lemma 4.3. If best-first beam search pops a hypothesis, then beam search necessarily pops that same hypothesis. Proof. We prove the lemma by induction on hypothesis length. The base case holds trivially: for hypotheses of length 0, both best-first beam search and beam search must pop ⟨BOS⟩, as it is the only item in the queue after initialization. By the inductive hypothesis, suppose Lemma 4.3 holds for hypotheses of length < t. Suppose best-first beam search pops a hypothesis y = y_{<t} ∘ y_t of length t. Case 1: Best-first beam search pops k hypotheses of length t − 1 before popping y, which is of length t. The sets of hypotheses of length t − 1 that each algorithm pops are necessarily the same, by the inductive hypothesis and the fact that they have the same cardinality. If best-first beam search pops y, which is of length t, then it must be among the top-k highest-scoring hypotheses of length t in Q_A* by the rules of the priority queue. Consequently, it must be in the top-k in Q_BS. Case 2: Best-first beam search has popped fewer than k hypotheses of length t − 1 before popping y. Then all remaining hypotheses of length t − 1 in Q_A* must have score(x, y′_{<t}) < score(x, y) by the rules of the priority queue. By the monotonicity of the score function, all extensions of those y′_{<t} will also have score(x, y′_{<t} ∘ y′_t) < score(x, y). Because no such y′_{<t} ∘ y′_t has a greater score than y, y must be in B_t. Corollary 4.3.1. Best-first beam search will never pop more hypotheses than beam search. Theorem 4.4. Once best-first beam search has popped k hypotheses of length t, hypotheses from time steps < t do not need to be popped. Proof. This follows from Lemma 4.1. If k hypotheses of length t have been popped, then these must be the top-k hypotheses from time step t. Therefore no hypothesis from a time step < t that is still in Q_A* would be in the top-k at time step t. Theorem 4.5. Beam search and best-first beam search return the same set of hypotheses, i.e., H_A = H_BS. Proof. Since |H_BS| = |H_A| = k, we only need to show y ∈ H_BS ⟹ y ∈ H_A.
Suppose, by way of contradiction, that there exists a hypothesis y ∈ H_BS such that y ∉ H_A. If y ∉ H_A, then we must not pop the prefix y_{<t} (where y = y_{<t} ∘ y_{t:|y|}) for some time step t < |y|. Case 1: At some time step t + j (j ≥ 0), we pop k partial hypotheses {y^(1)_{≤t+j}, ..., y^(k)_{≤t+j}} where y_{≤t+j} ∉ {y^(1)_{≤t+j}, ..., y^(k)_{≤t+j}}. By Lemma 4.1, it must be that score(x, y^(i)_{≤t+j}) > score(x, y_{≤t+j}) for all i ∈ {1, ..., k}. This implies that, for beam search, y_{≤t+j} would not be among the top-k paths at time step t + j, since by Lemma 4.3 the paths {y^(1)_{≤t+j}, ..., y^(k)_{≤t+j}} would also be evaluated by beam search. Therefore y cannot be in H_BS, which is a contradiction. Case 2: For no time step t + j (j ≥ 0) do we pop k paths. This can only happen if the algorithm stops early, i.e., we have found k complete hypotheses y^(1), ..., y^(k). If this is the case, then by the rules of the priority queue, each of y^(1), ..., y^(k) must have score greater than score(x, y_{<t}). By monotonicity of the score function, score(x, y^(i)) > score(x, y). This implies y cannot be in H_BS, which is a contradiction. Non-monotonic Scoring Functions. Non-monotonic scoring functions (cf. Definition 3.2) break the assumptions of § 4.1, in which case best-first beam search is not guaranteed to return a k-optimal hypothesis. However, when the scoring function is boundable from above, we can alter the original stopping criterion ( 2 in Alg. 2) such that k-optimality is again guaranteed. Given our assumed restriction on the search space, namely that any y* ∈ Y(x) has |y*| ≤ n_max(x), we can upper-bound the maximal score of any hypothesis under the scoring function in use. Formally, for any function score we have: stop(Q) ⟺ score(x, ŷ) ≥ score(x, y′) + U(x, y′) ∀ y′ ∈ Q (6), where ŷ is the best complete hypothesis found so far and U(x, y′) is the score-function-dependent upper bound on how much the score of y′ can increase as y′ is expanded further (for monotonic scoring functions, U(x, y′) = 0). In this situation, best-first beam search only terminates once no other hypothesis in Q can have a score greater than the best finished hypothesis. We note that a similar scheme has been used for optimal stopping with bounded length normalization. We discuss examples of non-monotonic scoring functions in § 5. A Note on Heuristics. Our analysis shows the equivalence of beam search and best-first beam search, i.e., when h(x, y) = 0. The analysis does not hold for arbitrary admissible heuristics. A poor heuristic, e.g., one that grossly overestimates the future score of continuing down one path, may cause other items to be pruned from best-first beam search that otherwise would have remained on the beam in standard beam search. Runtime Theorem 4.6. The runtime of best-first beam search is O(n_max · k · (|V| log(k) + log(n_max))). Proof. We pop at most n_max · k items. Each pop requires us to push |V| items. Each push requires log(k) time when the priority queue is implemented with a min-max heap (Atkinson et al., 1986) and incrementally pruned so that it has no more than k items. After pushing those |V| items, we have to perform a percolation in the priority queue of priority queues, which requires log(n_max) time. This yields O(n_max · k · (|V| log(k) + log(n_max))) time. Theorem 4.7. The runtime of standard beam search is O(n_max · k · |V| · log(k)). Proof. The proof is the same as for Theorem 4.6, but we can forgo the percolation step in the queue of queues because standard beam search proceeds in order of hypothesis length. This yields O(n_max · k · |V| · log(k)).
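As a concrete reading of the stopping criterion in (6) used above for bounded non-monotonic scorers, the small check below assumes the queue stores (score, hypothesis) pairs and that the caller supplies the scorer-specific upper bound U; for monotonic scorers U ≡ 0 and the check reduces to stopping at the first complete hypothesis. This is an illustrative sketch, not code from the paper.

```python
def should_stop(best_complete_score, active_items, U):
    """Stopping criterion (6): stop only when no active hypothesis, even
    under the most optimistic continuation allowed by the bound U, can
    overtake the best complete hypothesis found so far.

    active_items: iterable of (score, hypothesis) pairs still in the queue.
    U(hypothesis): upper bound on how much that score can still increase.
    """
    if best_complete_score is None:      # no complete hypothesis yet
        return False
    return all(best_complete_score >= s + U(y) for s, y in active_items)
```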
While the theoretical bound of best-first beam search has an additional log factor compared to standard beam search, we find this to be negligible in practice. Rather, we find that the number of calls to score, the scoring function under our model (e.g., a neural network), is often the bottleneck operation when decoding neural networks (see § 6 for empirical evidence). In terms of this metric, the beam search algorithm makes O(k · n_max) calls to score, as score is called once for each active hypothesis in B, and B may evolve for n_max rounds. The worst-case number of calls to score for best-first beam search is the same as for beam search, which follows from Lemma 4.3. Even before the findings of Stahlberg and Byrne (2019), it was well known that the best-scoring hypothesis with respect to the traditional likelihood objective can be far from ideal in practice (Murray and Chiang, 2018; Yang et al., 2018). For language generation tasks specifically, the results returned by neural models using the standard scoring function are often short and default to high-frequency words (Vinyals and Le, 2015; Shen et al., 2016). To alleviate such problems, methods that revise hypothesis scores to incorporate preferences for longer, less repetitive, or more diverse options have been introduced and are often used in practice. While most such techniques change the scoring function such that it is no longer monotonic, we can still guarantee the k-optimality of the returned hypothesis for (upper-)bounded scoring functions using the methods discussed in § 4.1. In the remainder of this section, we present alternate scoring schemes adapted to work with best-first beam search. Additionally, we present several heuristics which, while breaking the k-optimality guarantee, provide another set of decoding strategies worth exploring. Length Normalization. Length normalization is a widely-used hypothesis scoring method that aims to counteract the propensity for shorter sequences to have higher scores under neural models; this is done by normalizing scores by hypothesis length (see Murray and Chiang (2018) for more detail). For early stopping in beam search with length normalization, prior work proposes bounding the additive length reward as the minimum of r|x| (for a pre-determined optimal sequence length ratio r) and the final sequence length N_y: score_LN(x, y) = score(x, y) + β min{r|x|, N_y} (7), where β is the scaling parameter for the reward. We note, however, that the same can be done with the maximum sequence length n_max, such that the traditional length reward is recovered: score_LN(x, y) = score(x, y) + β min{n_max, N_y} = score(x, y) + β N_y (8). We formally propose two methods for length normalization. We use the scoring functions in (7) or (8) with either: (1) the following heuristic: h(x, y) = 0 if y.last() = EOS, and h(x, y) = β max{b − |y|, 0} if y.last() ≠ EOS (9), where b can be r|x| or n_max (we enforce r|x| < n_max); or (2) the stopping criterion as in (6), albeit with scoring function score_LN and a corresponding upper-bound function. Despite their similarities, these two methods are not guaranteed to return the same results. While the second method will return the same k-optimal hypotheses as beam search, using a heuristic during pruned search means we can no longer guarantee the k-optimality of the results with respect to the scoring function, as the heuristic may push hypotheses off of the beam. We present experimental results for both methods in § 6. Mutual Information. Maximum mutual information decoding (Li et al., 2016) aims to alleviate the inherent preference of neural models for high-frequency tokens when using the log-probability decoding objective.
Rather than choosing the hypothesis y to maximize conditional probability with respect to the input x, we instead choose y to maximize pointwise mutual information (PMI): PMI(x; y) = log p(x, y)/(p(x) p(y)) (11). Note that (11) is equivalent to log p(y|x)/p(y), which can be rewritten as log p(y | x) − log p(y), making the objective additive; thus (11) can conform to (4). From this last form, we can see how mutual information decoding penalizes high-frequency and generic outputs; the negative log p(y) term, as Li et al. (2016) point out, acts as an "anti-language model." One unfortunate side effect of this objective is that ungrammatical and nonsensical outputs, which have probabilities close to 0 under a language model like p(y), end up with high scores due to the second term in the score function. To address this problem, and to upper-bound the scoring function, we propose lower-bounding the language model term by a hyperparameter 1 ≥ ε > 0. We additionally use the strength hyperparameter λ employed by Li et al. (2016): score_PMI(x, y) = log p(y | x) − λ log max{p(y), ε} (12). Similarly to our methods for length normalization, we can use the scoring function in (12) either with the heuristic: h(x, y) = 0 if y.last() = EOS, and h(x, y) = −λ log ε · (n_max − |y|) if y.last() ≠ EOS (13), or with the stopping criterion as in (6), albeit with score_PMI and the upper-bound function U(x, y) = −λ log ε · (n_max − |y|). Since −λ log ε is the best possible score at any given time step, clearly we can bound the increase in score_PMI by the above function. However, as with our length normalization strategy, we lose the k-optimality guarantee with the heuristic method for mutual information decoding. We present experimental results for both methods in § 6. Experiments We run our algorithm on several language-related tasks that typically use beam search for decoding: neural machine translation (NMT) and abstractive summarization (AS). Specifically, experiments are performed on IWSLT'14 De-En (Cettolo et al., 2012), WMT'17 De-En (Bojar et al., 2017), MTTT Fr-En (Duh, 2018), and CNN-DailyMail (Hermann et al., 2015) using both Transformers (Vaswani et al., 2017) and convolutional sequence-to-sequence models (Gehring et al., 2017). For reproducibility, we use the data preprocessing scripts provided by fairseq (Ott et al., 2019) and follow their methods for training sequence transduction models. Hyperparameters are set in accordance with previous work. Specifically, on the IWSLT'14 and MTTT tasks, we follow the recommended Transformer settings for IWSLT'14 in fairseq (https://github.com/pytorch/fairseq/tree/master/examples/translation), which are based on Vaswani et al. (2017) and Gehring et al. (2017). Hyperparameters for models trained on the WMT task are set following version 3 of the Tensor2Tensor toolkit (Vaswani et al., 2018). We use byte-pair encoding (BPE; Sennrich et al., 2016) for all languages. Vocabulary sizes for WMT and IWSLT'14 are set from recommendations for the respective tasks in fairseq; for the MTTT tasks, vocabulary sizes are tuned on models trained with standard label-smoothing regularization.
Figure 1: Number of calls to the scoring function score vs. total sequence generation time. Each point is a decoded sequence. Colors represent different model architectures and shapes signify the decoding algorithm used (beam sizes 3 and 10 are included for each). There is no notable difference in the overhead (time-wise) of best-first beam search and beam search.
Similarly, the CNN/DailyMail dataset is pre-processed and uses BPE following the same steps as Lewis et al. (2019); model hyperparameters are likewise copied. Details are available on fairseq's website. We use BLEU (Papineni et al., 2002) (evaluated using SacreBLEU (Post, 2018)) for MT metrics and ROUGE-L (Lin, 2004) for abstractive summarization metrics. We build our decoding framework in SGNMT (https://github.com/ucam-smt/sgnmt). Running Time In Tab. 2, we report values as the average number of calls to the scoring function per input; we do not use wall-clock time, as this is heavily dependent on hardware. See Fig. 1 for empirical justification of the correlation between calls to the scoring function and runtime on the hardware our experiments were run on. For reference, in our experiments, the scoring function took on average > 99% of the total computation time, even with larger beam sizes, when the overhead of the search algorithm is most significant. We find that best-first (BF) beam search leads to significant speed-ups over both traditional beam search and beam search with early stopping, with a performance increase of ≈ 8x for a beam size of 500, where performance increase is defined as (old − new)/new. We likewise find that best-first beam search offers speed-ups over early-stopping methods that are not guaranteed to return the same results as standard beam search (see Tab. 3). Length Normalization We experiment with both forms of length normalization presented in § 5 and provide results in Tab. 4. We find that both methods, i.e., changing the stopping criterion and using a heuristic during search, provide improvements over baseline BLEU scores, albeit with different hyperparameter settings; increases are similar to improvements reported by Murray and Chiang (2018). Notably, using a heuristic causes a large percentage of search errors with respect to standard beam search using the same scoring function. However, the difference in results appears to be beneficial in terms of BLEU. Mutual Information We train a language model on the IWSLT dataset and use it to calculate p(y) from (12), as marginalizing over y is intractable (see Li et al. (2016) for further justification). We run experiments using both of the methods discussed in § 5 and present results in Tab. 5. We find that both methods provide results of equivalent BLEU score compared with the baseline. Memory Usage We conduct a set of experiments where we limit the total queue capacity to k · γ for γ ∈ {1, ..., n_max}, as described in § 3.3, and report the BLEU score of the resulting set of hypotheses. As shown in Tab. 6, we find that restricting the queue capacity does not harm output quality and, additionally, leads to an even greater runtime performance increase. For example, runtime for decoding of IWSLT'14 with a beam size of 10 can be improved by > 3x while returning results with better evaluation metrics. We find that improvements are even more pronounced for larger beam sizes. Across beam widths and tasks, we find that search error (with respect to standard beam search) is quite low for γ = 5. Additionally, for smaller γ, the change in BLEU score demonstrates that search error in this context does not necessarily hurt the quality of results.
Table 5: BLEU scores with the mutual information scoring function on IWSLT'14 De-En. Baseline is PMI decoding with unbounded p(y), i.e., ε = 0. Search error is with respect to beam search decoding of the baseline with the same β.
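A minimal sketch of the capacity rule used in these experiments (queue capacity k · γ, evicting the worst-scoring hypothesis from the earliest active time step, per § 3.3). The class and method names are illustrative rather than taken from the authors' code; the paper attains O(1) eviction with min-max queues and per-beam pointers, while this sketch favors brevity over that constant-time bound.

```python
from collections import defaultdict

class CappedBeams:
    """Per-time-step beams with a global capacity of k * gamma hypotheses."""

    def __init__(self, k, gamma):
        self.capacity = k * gamma
        self.beams = defaultdict(list)   # t -> list of (score, hypothesis)
        self.size = 0

    def push(self, t, score, hyp):
        self.beams[t].append((score, hyp))
        self.size += 1
        if self.size > self.capacity:
            # Evict the worst-scoring hypothesis from the earliest
            # non-empty beam, as in memory-reduced best-first beam search.
            t_min = min(t2 for t2, beam in self.beams.items() if beam)
            beam = self.beams[t_min]
            beam.pop(min(range(len(beam)), key=lambda i: beam[i][0]))
            self.size -= 1
```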
Related Work Our work is most similar to that of Zhou and Hansen (2005), who propose beam-stack search. However, they are focused on exact inference and still evaluate hypotheses in breadth-first order. Additionally, their algorithm requires O(n_max · k) memory; while best-first beam search has the same requirements, we introduce effective methods for reducing them, namely memory-reduced best-first beam search. Prior work proposes and proves the optimality of an early-stopping criterion for beam search, finding in practice, though, that the reduction in computation from their algorithm was generally not significant. We build on this work and introduce additional methods for avoiding unnecessary computation. Our method leads to better performance, as shown in Tab. 2. Klein and Manning (2003) use A* for PCFG parsing; however, they use the un-pruned version for exact search, which is not applicable to NMT or AS, as the memory requirements of the algorithm are far too large for these tasks. Subsequently, Pauls and Klein (2009) provide a method for pruning this search algorithm, albeit using a threshold rather than explicitly limiting the state space. Huang et al. (2012) also adapt A* for a k-best decoding algorithm. While their methods differ notably from ours, they likewise employ pruning techniques that allow for substantial speed-ups. Stahlberg and Byrne (2019) create an exact inference algorithm for decoding and use it to analyze the output of neural NMT models. While they likewise employ the monotonicity of the scoring function to make their method tractable, they do not focus on speed or on mimicking the results of standard beam search. Conclusion We propose best-first beam search, an algorithm that allows for faster decoding while still guaranteeing k-optimality. We provide results on several sequence-to-sequence transduction tasks that show the speed-ups our algorithm provides over standard beam search for decoding neural models. We adapt several popular alternate scoring functions to best-first beam search and provide a framework that can be used to adapt other scoring methods, such as coverage normalization or diverse beam search (Vijayakumar et al., 2016). We also provide a memory-reduced version of our algorithm, which returns competitive results in a fraction of the time needed for standard beam search.
8,892.8
2020-07-08T00:00:00.000
[ "Computer Science" ]
Kernel Well-Posedness and Computation by Power Series in Backstepping Output Feedback for Radially-Dependent Reaction-Diffusion PDEs on Multidimensional Balls Recently, the problem of boundary stabilization and estimation for unstable linear constant-coefficient reaction-diffusion equations on n-balls (in particular, disks and spheres) has been solved by means of the backstepping method. However, the extension of this result to spatially-varying coefficients is far from trivial. Some early success has been achieved under simplifying conditions, such as radially-varying reaction coefficients under revolution symmetry, on a disk or a sphere. These particular cases notwithstanding, the problem remains open. The main issue is that the equations become singular in the radius; when applying the backstepping method, the same type of singularity appears in the kernel equations. Traditionally, well-posedness of these equations has been proved by transforming them into integral equations and then applying the method of successive approximations. In this case, with the resulting integral equation becoming singular, successive approximations do not easily apply. This paper takes a different route and directly addresses the kernel equations via a power series approach (in the spirit of the method of Frobenius for ordinary differential equations), finding in the process the required conditions for the radially-varying reaction (namely, analyticity and evenness) and showing the existence and convergence of the series solution. This approach provides a direct numerical method that can be readily applied, despite singularities, to both control and observer boundary design problems. In the early and mid-1990s, inspired by "bifurcation control," wildly popular in the physics community at that time, and linked to interesting applications, Krener and Kang developed control designs for normal forms that include locally quadratic dependencies, which remain the definitive results on this subject. Art Krener cast an eye on PDE control systems at least as early as the mid-1990s, in the framework of a US Air Force funded project on nonlinear control of jet engine instabilities. While other researchers focused on the first-order Galerkin approximations of the models of rotating stall instabilities in axial flow compressors in jet engines, Art formulated and studied generalized higher-order Moore-Greitzer nonlinear models [15]. It is a delight to see Art Krener take on PDE control as the preoccupation for the present stage of his career [20,21]. Going beyond the conventional development of operator Riccati equations, and the limitation to the study of their well-posedness, in his trademark fashion, Art is producing computable approximate solutions to Riccati PDEs using Al'brekht's approach, which he has already brought to its state-of-the-art form for nonlinear and stochastic ODEs. Introduction In this paper we introduce an explicit boundary output-feedback control law to stabilize an unstable linear radially-dependent reaction-diffusion equation on an n-ball (which in 2-D is a disk and in 3-D a sphere). This paper extends the spherical harmonics [7] approach of [36], which assumed constant coefficients, using some of the ideas of [40]. For a finite number of harmonics, we design boundary feedback laws and output injection gains using the backstepping method [23] (with kernels computed using a power series approach), which allows us to obtain exponential stability of the origin in the L² norm.
Higher harmonics will be naturally open-loop stable. The required conditions for the radially-varying coefficients are found in the analysis of the numerical method and are non-obvious (evenness of the reaction coefficient). The idea of using a power series to compute backstepping kernels was first seen in [4] (without much analysis of the method itself, but rather numerically optimizing the approximation) and later in [10], where piecewise-smooth kernels require the use of several series. Here, we prove that the method provides a unique converging solution, in the spirit of the method of Frobenius for ordinary differential equations. Some partial results towards the solution of this problem were obtained in [38] and [37] for the disk and sphere, respectively; however, they required symmetry conditions. Older results in this spirit were obtained in [34] and [28]. This paper extends and completes our conference contribution [41], where the ideas were initially presented (without proof). To the best of our knowledge, this paper presents the first rigorous proof of convergence of a power series solution for the backstepping kernel equations. Thus, this work consolidates the method as a valid alternative to more traditional numerical approaches, which include finite difference approximations of the kernel equations [23,17,11,3], the use of symbolic successive approximation series [35], or the numerical solution of the integral version of the kernel equations [16,6]. The main advantages of the method are its simplicity (it does not require the sometimes cumbersome conversion to integral equations, thus preventing mistakes or any consideration of discrete meshes), speed (modern computing systems can reach high orders of the series in seconds), precision (one reaches a simple polynomial in one variable for the gain at the boundary that does not require interpolation), adaptability (it can be adapted to settings with discontinuous kernels by breaking the domain in pieces, see [10]), and capacity to produce kernels depending on parameters (by symbolically solving the kernel equations). The main drawbacks are the analyticity requirement on the system coefficients (even though most physical systems and examples seen in backstepping papers indeed possess analytic coefficients) and a possibly slow convergence rate of the series in some cases. The backstepping method has proved itself to be a ubiquitous method for PDE control, with many other applications including, among others, flow control [32,39], nonlinear PDEs [33], hyperbolic 1-D systems [14,5,2], and delays [24]. Nevertheless, other design methods are also applicable to the geometry considered in this paper (see for instance [31] or [8]). The structure of the paper is as follows. In Section 3 we introduce the problem. In Section 4 we state our stability result. We study the well-posedness of the kernels in Section 5, which is the main result of the paper, proving existence of the kernels and providing means for their computation; interestingly, odd and even dimensions require a slightly different approach. We next briefly discuss the observer in Section 6, but skip most details based on its duality with respect to the controller. Then, we give some simulation results in Section 7. We finally conclude the paper with some remarks in Section 8. n-D Reaction-Diffusion System on an n-ball Consider the following reaction-diffusion system in an n-dimensional ball of radius R: where u = u(t, x), with x = [x₁, x₂, . . .
, x_n]ᵀ, is the state variable, evolving for t > 0 in the n-ball Bⁿ(R) defined as with boundary conditions on the boundary of Bⁿ(R), which is the (n − 1)-sphere Sⁿ⁻¹(R) defined as The boundary condition is assumed to be of Dirichlet type, where U(t, x) is the actuation. On the other hand, the measurement y(t, x) is defined as where ∂_r denotes the derivative in the radial direction (normal to the (n − 1)-sphere), which is defined as ∂_r u(t, x) = ∇u · x/‖x‖. Differently from [36], we consider a non-constant λ(x) verifying the following assumption. Assumption 3.1. λ(x) is an analytic function of x and depends exclusively on the radius r = ‖x‖. Following [36], both the state and the actuation variable can be written in n-dimensional spherical coordinates, also known as ultraspherical coordinates (see [7], p. 93), which consist of one radial coordinate r and n − 1 angular coordinates θ. Then, using a (complex-valued) Fourier-Laplace series of spherical harmonics to handle the angular dependencies, defined as where N(l, n) is the number of (linearly independent) n-dimensional spherical harmonics of degree l, given by N(0, n) = 1 (representing the mean value over the n-ball) and, for l > 0, with Y_lm^n being the m-th n-dimensional spherical harmonic of degree l. The coefficients in (6)-(7) are possibly complex-valued. Following [36] and using (6)-(7), one reaches the following independent complex-valued 1-D reaction-diffusion equation for each spherical harmonic coefficient: evolving in r ∈ (0, R], t > 0, with boundary conditions In these equations, we have considered Dirichlet boundary conditions. The measurement would be the flux at the boundary, namely ∂_r u_l^m(t, R). Note that, following [12, p. 640], a second boundary condition, reflecting the second-order character of (9) and the need to avoid singular behaviors, can be expressed as: In the above equations, the integers m and l stand for the order and degree of the harmonic, respectively. Note that the higher the degree (corresponding to high frequencies), the more "naturally" stable (9)-(10) is, as seen next. Define the L² norm and the associated L² space as usual, where |f|² = f f*, with f* the complex conjugate of f. Lemma 3.1. Given λ(r) and R, there exists L ∈ N such that, for all l > L, the equilibrium u_l^m ≡ 0 of system (9)-(10) is open-loop exponentially stable; namely, for U_l^m = 0 in (10), there exists a positive constant D₁ such that, for all t, ‖u_l^m(t, ·)‖_{L²} ≤ e^{−D₁t} ‖u_l^m(0, ·)‖_{L²}. D₁ is independent of l, only depends on n, λ(r), ε, and R, and can be chosen as large as desired just by increasing the value of L. The proof is skipped, as it mimics [40], just by using the L² norm as a Lyapunov function and Poincaré's inequality. Thus one only needs to stabilize the unstable modes with l < L. Since the different modes are not coupled, we can stabilize them separately and then re-assemble them. Moreover, since only a finite number of harmonics is stabilized, there is no need to worry about the convergence of the control law as in [36], its spherical harmonics series being just a finite sum. Our objective can now be stated as follows: considering only the unstable modes, design an output-feedback control law for U_l^m using, for each mode, only the measurement of ∂_r u_l^m(t, R). Our design procedure is established in the next section along with our main stability result. Stability of controlled harmonics Next, for the unstable modes, we design the output-feedback law.
The observer and controller are designed separately using the backstepping method, following [36]; in this reference it is shown that both the feedback and the output injection gains can be found by solving a certain kernel PDE, which is essentially the same for both the controller and the observer. Thus, for the sake of brevity and to avoid repetitive material, we only show how to obtain the (full-state) control law, giving the basic observer design and some additional remarks later in Section 6. Design of a full-state feedback control law for unstable modes Based on the backstepping method [23], our idea is to utilize an invertible Volterra integral transformation, where the kernel K_lm^n(r, ρ) is to be determined, defined on the domain T = {(r, ρ) ∈ R²: 0 ≤ ρ ≤ r ≤ R}, to convert the unstable system (9)-(10) into an exponentially stable target system, where the constant c > 0 is an adjustable convergence rate. Setting r = R in (14) and (16), we obtain the boundary control as the following full-state law. Following closely the steps of [36] to find conditions for the kernels, and defining K_lm^n(r, ρ) = G_lm^n(r, ρ) ρ (ρ/r)^{l+n−2}, we finally reach a PDE that the G-kernels need to verify, with only one boundary condition. We assume as usual that these kernel equations are well-posed and the resulting kernel is bounded in T; this will be analyzed later in Section 5, which also provides a numerical method for its computation. Closed-loop stability analysis of unstable modes To obtain the stability result for the closed-loop system, we need three elements. We begin by stating the stability result for the target system. We then establish the existence of an inverse transformation that allows us to recover our original variable from the transformed variable. Then we relate the L² norm with spherical harmonics. With these elements, we construct the proof of stability, mapping the result for the target system to the original system. This is done by showing that the transformation is an invertible map from L² into L². We first discuss the stability of the target system, having the following lemma (Lemma 4.1), where the constant D₂ is independent of n, l, or m, and only depends on c, ε, and R; it can be chosen as large as desired just by increasing the value of c. Proof. Consider the Lyapunov function given by the weighted L² norm of the state; then, taking its time derivative, we obtain the desired decay, independently of the value of n, thus proving the result. Lemma 4.2. For |l| ≤ L, let c be chosen as in Lemma 4.1, and assume that the kernel K_lm^n(r, ρ) is bounded and integrable. The system (9) with boundary control (17) is closed-loop exponentially stable; namely, there exist positive constants C and D₂ such that the corresponding decay estimate holds. C and D₂ are independent of m or l, and only depend on n, L, λ(r), ε, and R. Proof. The proof consists of two parts: one is the existence of an inverse transformation, and the other is the equivalence of the norms of the variables u_lm^n and w_lm^n; the result then follows from the stability of the target system. As shown in [36], when Kⁿ(r, ρ) is bounded and integrable, the map (14) is invertible, and its inverse transformation is also bounded and integrable. Call now Ǩ and Ľ the maxima of the bounds of the functions K_lm^n and L_lm^n for a given n and all l ≤ L in their respective domains. It is easy to get the norm equivalences (27) and (28). Combining then Lemma 4.1 with the norm equivalence between u_l^m and w_l^m stated in (27) and (28), it is easy to obtain the claim. Let C = √(M₁M₂); the result then follows.
Note that, combining Lemmas 3.1 and 4.2 and taking D = min{D₁, D₂}, we get the following stability result for all spherical harmonics, and thus for the full physical system. Theorem 1. Under the assumption that the kernel K_lm^n(r, ρ) is bounded and integrable, the equilibrium u_l^m ≡ 0 of system (9)-(10) under control law (17) is closed-loop exponentially stable; namely, there exists a positive constant D such that, for all t, ‖u_l^m(t, ·)‖_{L²} ≤ C e^{−Dt} ‖u_l^m(0, ·)‖_{L²}, where D can be chosen as large as desired just by increasing the values of L and c in the control design process. Well-posedness of the kernel equations Next, we state the main result of the paper, which was in part assumed in Theorem 1, also giving the requirements for λ(r). In addition, the proof of the result provides a numerical method to compute the kernels, which is an alternative to successive approximations, which do not work in this case due to the singularities at the origin (see for instance [38] for the resulting singular integral equation that would need to be solved). Theorem 2. Under the assumption that λ(r) is an even real analytic function on [0, R], for a given n > 1 and all values of l ∈ N there is a unique power series solution G_lm^n(r, ρ) of (18)-(19), even in its two variables, which is real analytic in the domain T. In addition, if λ(r) is analytic, but not even, then there is no power series solution to (18)-(19) for most values of l ∈ N. The requirement of evenness for λ(r) might seem unusual. However, if we carefully consider Assumption 3.1, and since r = ‖x‖ = √(x₁² + x₂² + ... + x_n²), in physical space λ(x) will be non-analytic unless λ(r) is even. Thus, while solutions to the kernel equations might exist for non-even λ(r), we cannot expect them to be analytic. This result notwithstanding, if one is interested in controlling only very low-order harmonics, kernels do exist without this requirement, as shown in [38,37], which only consider the 0-th order harmonic (the mean), respectively for a disk and a sphere, and only require boundedness of λ(r). Proof of Theorem 2 We start by giving an algorithmic method to compute the power series for G_lm^n(r, ρ), which will allow us to prove Theorem 2 as well as to numerically approximate the kernels. First of all, we show that the evenness of λ(r) is a necessary condition to find an analytic solution. Next, it is possible to establish that the series for G_lm^n(r, ρ) only has even powers. Exploiting this property to suitably express (18)-(19), we finally show the existence of the power series, and thus Theorem 2 follows. Convergence and related issues (radius of convergence) are studied towards the end, finishing the proof. Computing a power series solution for the kernels Starting from the most basic assumption of Theorem 2, we consider that λ(r) is analytic in [0, R]; therefore it can be written as a convergent series (31) (encompassing c and ε for notational convenience), which, by the evenness of λ, may only contain even powers, that is, λ_i = 0 if i is odd. We then, in the spirit of the method of Frobenius for ordinary differential equations, seek a solution of (18)-(19) of the form (32). The series in (32) collects together (in the parenthesis) all the polynomial terms with the same degree. It is easy to see that the boundary condition (19) implies (33), which in particular implies C₀₀ = −λ₀/(2ε).
On the other hand, the left-hand side of (18) becomes (34), where we have defined the coefficients B_ij in (35). Finally, to express the right-hand side of (18), denote γ = n + 2l − 2 ≥ 0 and define the corresponding operators; thus, rewriting the sum to be homogeneous with (34), we find (38). Equating (38) and (34), we obtain the system of equations (39)-(41). With λ(r) and n fixed, we want to show that the kernel equations are solvable for all values of l ∈ N. Thus, γ takes increasing values. In addition, we can assume γ ≠ 1, since the case n = 3, l = 0 (i.e., γ = 1) was already addressed in [37], showing that it reduces to the usual 1-D kernel equations for parabolic systems [23], which admit a power series solution according to [4]. The first two equalities, if γ ≠ 1, imply (42) and, in particular, C₁₀ = C₀₁ = 0, whereas the second equality results in a system of equations that needs to be solved recursively, starting at i = 0. It can be rewritten as (43) to start at i = 2 (since C₀₀, C₁₀ and C₀₁ are already determined). Note that for each i ≥ 2, there are i + 1 coefficients in (32) but i + 2 relations: one from (33), two from (42) and i − 1 from (43). Thus, it would seem that (33)-(42)-(43) is in general an incompatible system. This is indeed the case if λ(r) is not even, i.e., if the series (31) contains odd powers, as shown in the next section. Evenness requirement of λ(r) We start with the following result. Lemma 5.1. If λ(r) is not even, then there are values of l ∈ N for which there is no solution to (18)-(19) in the form of (32). Proof. We show that, if there exists an odd i such that λ_i ≠ 0, then there is no solution in the form of a power series. First, if λ₁ ≠ 0, then from (33) we know that C₀₁ + C₁₀ = −λ₁/(4ε); however, since from (42) one has C₀₁ = C₁₀ = 0, this cannot hold. Consider now that there is indeed a value i > 1 for which a coefficient λ_i is distinct from zero, and let us show the result by contradiction. Consider the first such i. Now, since the right-hand side of (43) depends on C_{(i−2)j}, one gets that, for all odd i′ < i, C_{i′j} must be zero, from (33)-(42)-(43) all having a zero right-hand side (this can be formalized with an induction argument; we skip the details). Thus, at i, the system of equations (44)-(45) has to be verified and, for 0 ≤ j ≤ i − 2, relation (46). Let us consider l sufficiently large such that γ > i, so that the coefficient in (46) is distinct from zero in the full range of j, namely 0 ≤ j ≤ i − 2. Then none of the coefficients in (46) is zero. Therefore, combining (44) with (46), from C_{i1} we can find C_{i3}, then C_{i5}, and so on. Similarly, from C_{i(i−1)} we can find C_{i(i−3)}, C_{i(i−5)}, and so on. These two sequences do not overlap because i is odd, and therefore one finds C_{ij} = 0 for all 0 ≤ j ≤ i, which is not compatible with (45) unless λ_i = 0, which contradicts our initial assumption. Next we show that evenness of λ implies evenness of the kernels. Lemma 5.2. If λ(r) is even, then C_{ij} = 0 if either i or j is odd. Proof. We need to prove that C_{ij} = 0 if either i or j is odd. From the proof of Lemma 5.1, we directly know that for odd i one has C_{ij} = 0. Fix, then, i even and consider j odd; for i = 2, the result is obvious. Assuming C_{i′j} = 0 for all even numbers i′ < i and odd j, let us prove the result by induction. As before, we would need to solve (45)-(43). The right-hand side of (43) is zero, as in (46), by the induction hypothesis (if k is even) or directly (if k is odd). Then, following again the proof of Lemma 5.1, we have the same system of equations (45)-(46) for our even i and odd j's.
Now, starting from C_{i(i−1)} = 0, we find C_{i(i−3)} = 0, then C_{i(i−5)}, and so on; however, with i being even, this sequence now ends in C_{i1} (thus, the proof of Lemma 5.1 does not apply, because the sequences starting at C_{i1} and C_{i(i−1)} would overlap). Thus, one finds C_{ij} = 0 for all odd values of j between 1 and i − 1. Well-posedness of the coefficient system Next, we show that the coefficients of the power series can always be found, which by the previous lemmas only requires studying the even coefficients. For simplification, we redefine (31) and (32) accordingly, without bothering to redefine the coefficients (note that (35) does not require any change). Defining as well γ′ = γ/2 = n/2 + l − 1 ≥ 0, the new system of equations to be solved is (48) and (49). Let us outline the solution procedure, and later derive some conclusions. Solving in (49) every C_{ij} as a function of C_{i(j+1)}, we get an expression that can be written more briefly if we define, for i > 0 and 0 ≤ j < i, the coefficients a_{ij}. To simplify the equation a bit, redefine the coefficients; then, iterating this equality until reaching C_{ii} and inserting the result into (48), we reach an equation for C_{ii}, namely one stated in terms of the quantities κ(i, γ′). It is quite clear that these κ(i, γ′) will play an important role; in particular, if they are non-zero, one can always find a unique solution for the coefficients C_{ij}. Thus one needs to show that κ(i, γ′) ≠ 0 for any possible i or γ′. The following lemma shows this is indeed the case, by exploiting a connection of the a_{ij} coefficients with Gauss' hypergeometric functions. Lemma 5.3. For all admissible i and γ′, κ(i, γ′) > 0. Proof. Recalling from (56) and (51) the definitions of κ(i, γ′) and a_{ij}, respectively, the sum can be rewritten in terms of binomial numbers and rising/falling factorials [18]; reordering the sum and using the obvious fact (x)_j = (−1)^j (−x)_j, consider now the finite polynomial p_i(x, γ′). From the definition of Gauss' hypergeometric function [1, p. 561], denoted ₂F₁(a, b; c; x), in the polynomial case (a or b a non-positive integer), and noting that (−1)^j (i choose j) = (−i)_j / j!, it is immediate that p_i is such a polynomial case; therefore, positivity follows from Gauss' summation theorem [1, p. 556], which is applicable in this case since 1 + 2i > 0, finishing the proof. The next result is an immediate conclusion of the positivity of κ(i, γ′): Lemma 5.4. If λ(r) is even, then, for all values of l ∈ N, the coefficients in (47) that solve (18)-(19) can be uniquely found up to any order i. To conclude the proof of Theorem 2, we need to prove analyticity of the series (47). This step, however, requires splitting the problem into two possible cases: odd dimension (where γ′ = n/2 + l − 1 is not an integer) and even dimension (γ′ integer). Proof of analyticity for odd dimension In the odd-dimension case, define the coefficients L_{ij}, with L_{i0} defined as 1; these are well-defined given that γ′ is non-integer. Now, in (48)-(49), denote C_{ij} = L_{ij} Č_{ij}. Replacing this in the recurrence, the new set of recurrence equations for Č_{ij} becomes rather simple, and the recurrence is easily solvable in terms of one element, for instance Č_{ii}, for j = 1, . . . , i. Replacing in (68), we reach an equation for Č_{ii}. Now, solving for the remaining coefficients from (70) and, finally, recovering the coefficients C_{ij} from C_{ij} = L_{ij} Č_{ij} and using (67), the solution is quite explicit.
Notice that ρ ≤ r; thus, defining α_i = Σ_{j=0}^{i} |C_{ij}|, if we can prove that Σ_{i=0}^{∞} α_i r^{2i} converges for a certain radius of convergence R, so does G_lm^n(r, ρ) for ρ ≤ r ≤ R, and thus we obtain the required analyticity. To prove the convergence of the power series Σ_{i=0}^{∞} α_i r^{2i}, consider the following lemma, inspired by [25]. Lemma 5.5. Consider g(x) = Σ_{i=0}^{∞} g_i x^{2i} and h(x) = Σ_{i=0}^{∞} h_i x^{2i} analytic functions, both with radius of convergence R. Let i₀ be a nonnegative integer, let (a_i)_{i=0}^{∞} be a sequence of real numbers, and define f(x) = Σ_{i=0}^{∞} a_i x^{2i}, where the a_i verify, for i > i₀, the stated recurrence, and where the sequences b_i, c_i ≥ 0 are decreasing for i > i₀, with c_i also verifying lim_{i→∞} c_i = 0. Then f(x) is analytic with radius of convergence at least R. Proof. Since g and h are analytic with radius of convergence R, we can write |g_i|, |h_i| ≤ M R^{−2i}, where the definition as power series of squares has been taken into account. Define ǎ_i = a_i for i ≤ i₀ and, for i > i₀, ǎ_i = b_i M R^{−2i} + c_i Σ_{j=0}^{i−1} ǎ_j M R^{−2i+1+2j}. Obviously a_i ≤ ǎ_i, and therefore the radius of convergence of f(x) would be at least the radius of convergence of f̌(x) = Σ_{i=0}^{∞} ǎ_i x^{2i}. The required inequality holds for sufficiently large i > i₀, and thus in the limit, therefore proving the lemma. To apply Lemma 5.5 to (76) we need to bound some of the terms. In particular, we need to find suitable b_i and c_i. Thus, assuming that b_i and c_i verify the conditions given in Lemma 5.5, and given that λ(x) has a radius of convergence of at least R, we see that G_lm^n(r, ρ) converges and defines an analytic function for ρ ≤ r ≤ R, thus proving Theorem 2 for the odd-dimension case. It remains to find such b_i and c_i. Proceeding exactly as in Lemma 5.3 with a slight modification, we directly find the required expression. Now, let N = γ′ − 1/2 and i > 2N. One can see that the corresponding bounds hold for 1 ≤ j ≤ N. Thus, for i > 2N, calling d_i the corresponding sequence, it is clear that d_i is a decreasing sequence, as shown by the ratio test on d_{i+1}/d_i. It is obvious that b_i is decreasing, since d_i is decreasing. Now we need to find a sequence c_i for the second term in (81). The following lemmas are needed to find a bound for (88). Proof. Consider the ratio of consecutive terms. It is easy to see that the sequence always increases as long as j ≤ N, and we can look for a maximum at some j* > N. Then, for j > N, denote the ratio of (89) by f. Manipulating the expression, we find (i² − 1) − 2j(i + 1) + γ′(i + 1) ≤ 0 and, canceling the term (i + 1), the inequality (92) for j is reached. Therefore, the ratio is bounded by one if (and only if) the bound given by (92) is verified. Therefore, we conclude that the maximum of the sequence |L_{ij}| is reached at j*, thus finishing the proof. Thus, we are left with showing that |L_{ij*}|/|R_i| is decreasing, which is expected, since L_{ij*} is one of the elements that appear in the sum R_i. Using the expression (83) and the formula for L_{ij} that involves the Gamma function, we obtain the following estimate. Now, the decreasing character of the sequence is established as follows (consider the case where i + N is odd, so that j* = (i − 1 + N)/2; the even case is analogous). Consider Stirling's approximation to the factorial, namely n! ≈ √(2πn) (n/e)ⁿ. Then:
where we have broken the approximation into three functions f₁, f₂, f₃. Notice that clearly lim_{i→∞} f₁(i) = 0 (since f₁(i) behaves like O(1/i) for large i), lim_{i→∞} f₂(i) = 1, and lim_{i→∞} f₃(i) = 16; thus it only remains to compute lim_{i→∞} f₂(i)^i, which is an indeterminate form of the kind 1^∞. Resolving it (the details are omitted for brevity), one obtains that the limit is indeed 1. Thus, it is possible to find the decreasing sequence c_i in (81), concluding the proof of convergence and analyticity in odd dimension. Proof of analyticity for even dimension The fact that γ′ is an integer makes the odd-dimension approach a priori impossible, since (65) would not be well defined (it would contain divisions by zero). However, to overcome that difficulty, we employ a partial solution of the kernel equations, to the order γ′ − 1, which helps to regularize the problem. For this, consider F(r, ρ) = Σ_{i=0}^{γ′−1} r^{2i} φ_i(ρ²). Replacing this function in (18), one gets a recursive set of ODEs, which is solved starting at i = γ′ − 1. This can be written as an ODE with a regular singular point at x = 0. By applying the Frobenius method [13, Chapter 36], one can rewrite this equation so that its indicial equation is r(r − 1) + (1 + γ′/2)r = 0; thus r₁ = 0 and r₂ = −γ′/2. We are interested in the solution of the form φ = Σ_{i=0}^{∞} a_i ρ^{2i} (corresponding to r₁ = 0) and discard the other solution. By Fuchs' theorem [9, p. 146], this solution is analytic where λ(x) is analytic; thus the radius of convergence of the resulting φ_{γ′−1}(ρ²) is greater than R. Next, for i = γ′ − 2 down to i = 0, the equation has the same indicial equation and again admits a solution in the required form. Applying once more Fuchs' theorem, this solution is analytic in intervals where both λ(x) and φ_{i+1} are analytic. Thus, by induction, we find a family of solutions such that the radius of convergence of all φ_i is greater than R. The solutions just found have a degree of freedom (the first coefficient a_i of their power series, which is φ_i(0)). The idea is to construct the solution such that the boundary condition G_lm^n(r, r) = H(r) is satisfied up to order 2γ′ − 2. Thus: F(r, r) = Σ_{i=0}^{γ′−1} r^{2i} φ_i(r²), and, expanding φ_i(r²) in power series, it can be shown that this scheme produces valid initial values for the φ_i's. However, an easier approach is to follow the general series approach of Section 5.1.1 up to order i = γ′ − 1. By the uniqueness of the series development and identifying coefficients, it can easily be shown that φ_i(0) = C_{ii}. Next, calling G_lm^n(r, ρ) = Ǧ_lm^n(r, ρ) + F(r, ρ), the new boundary condition for the PDE becomes Ǧ_lm^n(r, r) = H(r) − F(r, r), which starts at order 2γ′. Thus, we can propose Ǧ_lm^n(r, ρ) = r^{2γ′} F₂(r, ρ). One can derive the PDE for F₂ and, following previous sections, calling ψ(r²) = (H(r) − F(r, r))/r^{2γ′} and abusing the notation by keeping the same name for the coefficients C_{ij}, one can find a power series development of F₂(r, ρ). Now the approach of Section 5.1.4 becomes applicable, and even easier, since all coefficients are positive. Indeed, defining the redefined coefficients L_{ij} and mimicking Section 5.1.4, we reach (110), where the last step can be performed due to the positivity of the redefined coefficients L_{ij} compared to Section 5.1.4. Again, we apply Lemma 5.5 to (110). In this case, we define b_i directly as a decreasing sequence. Then we need to find c_i bounding the corresponding maximum over r ∈ {0, ..., i − 1}, and c_i should be shown to be decreasing (for sufficiently large i) and convergent to zero.
Consider the following lemma. Lemma 5.8. Consider L_{ij} as defined in (109) and R_i, R_{ij} as defined in (109). Then: the first property becomes evident, whereas the second property is immediate from the first. For the third property, note the corresponding identity. The fourth property is obvious, noting that L_{ij} ≤ L_{i(j+1)} for j < i/2. Finally, for the last property, first note that it is only required to study 0 ≤ r ≤ i/2, given the third property; since the resulting expression is an increasing function of r for 0 ≤ r < i, we can bound it by its value at r = i/2, thus proving the final property. Therefore, setting c_i = 4(i + 2)/(i(i + 2γ′)), a sequence that decreases to zero, we can apply Lemma 5.5 to (110) and follow the same steps as in Section 5.1.4 to obtain the result of Theorem 2 for the case of even dimensions. Observer design For the observer design, one reaches a condition on the observer kernel P_lm^n along the diagonal, which can be written as 0 = λ(r) + ε ∂_r P_lm^n(r, r) + ε (d/dr)(P_lm^n(r, r)) + (n − 1) ε P_lm^n(r, r)/r + ε ∂_ρ P_lm^n(r, r) − (n − 1) ε P_lm^n(r, r)/r. (124) Following [36], and after some computations, we reach the boundary conditions for the kernel equations. It turns out that the observer kernel equation can be transformed into the control kernel equation, therefore obtaining a similar explicit result. For this, define P̌_lm^n(r, ρ) = (ρ^{n−1}/r^{n−1}) P_lm^n(ρ, r), and it can be verified that the equation now governing P̌_lm^n(r, ρ) is exactly the equation satisfied by K_lm^n(r, ρ). Thus P̌_lm^n(r, ρ) = K_lm^n(r, ρ), and we can apply our previous result of Section 5. The observer error dynamics has the same stability properties derived in Section 4 for the closed-loop system under full-state control. As in the controller case, only a limited number of modes need to be estimated, namely those that are not naturally stable by Lemma 4.1; this is the main difference from the result given in [36]. Finally, the controller-observer augmented system can be proved closed-loop stable as in [36], using the separation principle given the linearity of the system, with the desired convergence rate, and without much modification; we skip the details, which require going up to H¹ stability, as in [36]. Implementation and simulation study In this section, a simulation experiment on a three-dimensional unit ball (n = 3, R = 1) is taken as an example to illustrate the effectiveness of the proposed control, along with some implementation remarks. The system with the output-feedback control law is simulated on 0 ≤ t ≤ 2 s with the following parameters: ε = 1, λ(r) = 10r⁴ + 50r² + 50, c = 3. We consider that the system has a random initial condition u₀ ∈ [0, 10], and the observer's initial condition is set to the actual state plus a normally distributed error with zero mean and σ² = 0.5. Fig. 1 shows the plots of the polynomial approximation of the kernels K_lm³, which is obtained by first expanding (λ(r) + c)/ε using (31), and then finding the coefficients of (32) up to a cutoff at the p-th powers by solving (39)-(41) recursively for each i up to p; at each step one needs to compute the coefficients B_{ij} given by (35) from the previously-found coefficients C_{ij}. The value of K does not depend on m, so we omit this subindex, and l is varied from zero to the value given by Lemma 3.1. The value of p is chosen as p = 15. Applying Lemma 3.1, one obtains L = 11; however, here, to save space, we only show the first six approximate numerical solutions of the control gains. As shown in Fig. 1, we find that K_l becomes increasingly smaller as l increases.
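Once the coefficients C_ij have been computed by the recursion just described, evaluating the truncated kernel and the boundary gain is a direct polynomial evaluation. The sketch below assumes the coefficients are supplied as a triangular array C[i][j] for the expansion G(r, ρ) = Σ_i Σ_{j≤i} C_ij r^{i−j} ρ^j of (32), and uses the relation K_lm^n(r, ρ) = G_lm^n(r, ρ) ρ (ρ/r)^{l+n−2} from the controller design; the function names are illustrative, not taken from the authors' code.

```python
def eval_gain(C, R, rho, l, n):
    """Boundary control gain K(R, rho) from truncated series coefficients.

    C[i][j] are the coefficients of G(r, rho) = sum_i sum_{j<=i}
    C[i][j] * r**(i-j) * rho**j, computed recursively up to the cutoff p;
    the kernel is K(r, rho) = G(r, rho) * rho * (rho / r)**(l + n - 2).
    """
    G = sum(C[i][j] * R ** (i - j) * rho ** j
            for i in range(len(C)) for j in range(i + 1))
    return G * rho * (rho / R) ** (l + n - 2)

# Hypothetical usage on a radial grid, e.g. for n = 3, R = 1:
# gains = [eval_gain(C, 1.0, rho, l, 3) for rho in radial_grid]
```

Since ρ ≤ r ≤ R, the gain at the boundary is simply a polynomial in ρ, which is what makes interpolation unnecessary.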
In order to avoid a dramatic increase in the complexity of the simulation caused by the high dimension, we employ a method also based on spherical harmonic expansions, which greatly reduces the error. Thus, we only calculate the harmonics u_l^m, which need discretization only in the radial direction, and then we sum up a finite number S of harmonics to recover u. When S > 0 is a large enough integer, the error caused by the use of a finite number of harmonics is much smaller than the angular discretization error. Thus, the simulation is carried out using a truncated variant of (6), where the spherical harmonics are defined in terms of the associated Legendre polynomials P_lm³. The open-loop and closed-loop states are shown in Fig. 2(d) and Fig. 3(e), respectively. Note that in these figures the ranges of the color bars are different, avoiding overly uniform colors in Fig. 3. When the open-loop and closed-loop evolutions are compared directly, the validity of the proposed method is illustrated more intuitively. Fig. 3(f) shows the L² norm of the observation error, from which it can be seen that the system begins to converge to its zero equilibrium after the observation error has already settled to zero as well. The evolutions at different layers, namely r = 0.002, r = 0.3, r = 0.5, and r = 0.8, are shown in Fig. 4(a)-(c), and the observer errors are presented in Fig. 4(b), (d). For clarity, only the first 0.4 s of the response are shown here. Fig. 5 shows the control effort at the boundary. It can be seen that the system driven by the proposed boundary control eventually converges after a short-term fluctuation. Conclusion We have shown a design to stabilize a radially-varying reaction-diffusion equation on an n-ball, using an output-feedback boundary control law (with boundary measurements as well) designed through the backstepping method. The radially-varying case proves to be a challenge, as the equations become singular in the radius; when applying the backstepping method, the same type of singularity appears in the kernel equations, and successive approximations become difficult to use. Using a power series approach, a solution is found, thus providing a numerical method that can be readily applied to both control and observer boundary design. In addition, the required conditions for the radially-varying coefficients (analyticity and evenness) are revealed. This result can be extended in several ways. If one has Neumann boundary conditions at the controlled boundary (which implies that one is measuring the state at the boundary instead of its normal derivative), or even Robin boundary conditions, the method can be extended straightforwardly, since the transformation itself does not change and, therefore, the backstepping kernels remain the same. Only the particular control/observer gains, deduced from the backstepping kernels, would change, along with a small modification of Lemma 4.1 to account for the change in the boundary conditions. In practice, this result can be of interest for the deployment of multi-agent systems, following the spirit of [29]; thus, the radial domain mirrors a radial topology of interconnected agents that follow the reaction-diffusion dynamics to converge to equilibria that represent different deployment profiles.
Since the plant can be chosen as desired (thereby setting the behavior of the agents), the use of analytic reaction coefficients is not actually a restriction, but rather opens the door to richer families of deployment profiles compared with the constant-coefficient case of [29]. However, the theoretical side of the result needs to be further investigated; one avenue of research that can be explored is the relaxation of the analyticity hypothesis by using reaction coefficients belonging to the Gevrey family; the kernels can then be analyzed to verify whether they are still analytic, are Gevrey-type kernels, or simply do not converge. The rate of convergence of the obtained power series is also of interest and shall be explored. We have experimentally observed that the rate of convergence of the series representation of $\lambda(r)$ has a considerable influence on it. In addition, one could also explore how fast the series converges in the case of constant $\lambda$, since an explicit solution is known from [36]. In particular, the worst case in a domain with radius $R$ would be given by the convergence rate of the Maclaurin series of $I_1\!\left(\sqrt{\lambda/\varepsilon}\, R\right)$, where $I_1$ is the modified Bessel function of order 1. Since this function behaves quite closely to an exponential if its argument is large (which would be the case with slowest convergence), the number of required terms would be given by the remainder of the power series of an exponential. In that case, it is easy to see that the size of the term $\sqrt{\lambda/\varepsilon}\, R$ would define the required truncation level. If we extrapolate this behavior, then, beyond the convergence rate of the series representation of $\lambda(r)$, we can say that higher values of $R$ and $\max_{r\in[0,R]} |\lambda(r)|$ and lower values of $\varepsilon$ would result in slower-converging series; coincidentally, these are exactly the same factors that would result in a more unstable open-loop plant.
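To make this last remark concrete, here is a small numerical sketch (an illustration added here, not part of the original analysis) that counts how many Maclaurin terms of $I_1(x)$, with $x = \sqrt{\lambda/\varepsilon}\,R$, are needed to reach a fixed relative accuracy; larger $\lambda$ or $R$, or smaller $\varepsilon$, push the required truncation level up.

```python
import math
from scipy.special import iv  # modified Bessel function I_nu

def terms_needed(x, rtol=1e-8):
    """Number of Maclaurin terms of I_1(x) needed for a given relative tolerance.
    I_1(x) = sum_{k>=0} (x/2)**(2k+1) / (k! (k+1)!)."""
    target = iv(1, x)
    partial, k = 0.0, 0
    while True:
        partial += (x / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
        k += 1
        if abs(partial - target) <= rtol * abs(target):
            return k

eps, R = 1.0, 1.0
for lam in (10.0, 50.0, 200.0, 1000.0):
    x = math.sqrt(lam / eps) * R
    print(f"lambda = {lam:6.0f}  ->  x = {x:6.2f},  terms needed: {terms_needed(x)}")
```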
10,425.8
2023-07-01T00:00:00.000
[ "Engineering", "Mathematics" ]
Using CUBES strategy in a remote setting for primary mathematics word problems

Much research has been carried out worldwide over the years to identify methods that help pupils solve mathematical word problems. This study examines the use of the CUBES Maths Strategy, a mnemonic device for solving word problems, and was conducted in a remote setting. An action research approach using mixed methods was adopted, and all data collected were analysed both quantitatively and qualitatively. The participants were pupils from a small local government primary school, aged between 8 and 9. Pupils' results from the given pre- and post-tests were quantitatively analysed using the Wilcoxon Signed-Rank Test, which concluded that there was no significant difference in test scores. A Newman's Error Analysis interview was conducted to investigate the source of errors committed by the pupils, which concluded that the most prominent type of error is the Comprehension error, followed by the Transformation error. From the observations and reflections, it can be deduced that, as the research was done in a remote setting, the CUBES Maths Strategy was not fully utilised. These results may stem from the limited interaction between teachers and students during remote online learning.

Introduction

The outbreak of COVID-19, a highly transmissible infectious disease caused by the SARS-CoV-2 virus (WHO, 2020), has disrupted education worldwide. The traditional physical classroom setting was changed to an online setting, where pupils learn from their individual homes while following social and physical distancing policies. Brunei Darussalam (hereafter, Brunei) responded similarly: the workforce and school students practised Work from Home and Online Learning procedures, respectively. During the Third Wave, between January 2022 and March 2022, the pandemic lessened, and parents gradually returned to work, while children in the primary sectors and below continued learning online at home due to physical school closures. Thus, asynchronous online lessons were carried out due to the lack of parental supervision and the limited availability of personal devices.

The purpose of this research is to investigate the use of the CUBES mathematics strategy in asynchronous online lessons for primary pupils. This strategy is a mnemonic device that breaks a given word problem down into smaller parts, aiding understanding and comprehension of the passage. From the authors' past experience, pupils were often unable to comprehend a mathematical word problem and identify the steps to take. Hence, the research sought to discover whether the use of this strategy would affect the performance of primary pupils when solving mathematical word problems. Mathematics, one of the core subjects taught from the early years of education, is found to be one of the most challenging subjects worldwide. Similarly, in Brunei, for several decades, not only have pupils struggled to solve mathematical problems, but teachers have also experienced difficulties in the teaching and learning of the subject (MoE, 1993).
There were growing concerns regarding the drop in pupils' mathematical achievement and their lack of motivation for learning the subject (Majeed et al., 2002; Mundia & Metussin, 2019), which is why studies are continuously being carried out to find ways to address these problems. Gafoor and Kurukkan (2015) concluded that mathematics is considered a difficult subject due to aversive teaching styles and difficulties in following instructions, understanding the subject, and remembering the equations and procedures needed to solve problems. Word problems are one of the main topics in the syllabus that many pupils struggle with. The difficulty of word problems is influenced by numerical factors, the complexity of linguistic factors, and the interrelation between the two (Daroczy et al., 2015). Gooding (2009) compiled five categories of difficulties that pupils may face while solving mathematical word problems: reading and understanding the language used, recognising and imagining the context, forming a number sentence, carrying out the mathematical calculations, and interpreting the answer.

Most pupils who struggle to solve mathematical word problems were found to have difficulties reading the word problem and were unable to comprehend it. This is because they might not fully possess the conceptual knowledge required to solve the problems correctly (Cummins et al., 1988). They tend to misinterpret the given keywords, which leads them to identify the wrong mathematical operation to use. This commonly happens when the given word problem seems too lengthy to the pupils. Due to the lack of textual understanding, instead of taking time to comprehend the passage carefully, pupils were seen to skim through the passage and try to identify keywords that would signal which operation was to be used (Pungut & Shahrill, 2014). Pupils from monolingual backgrounds with little or close to no exposure to the English language (Jones, 2016) struggle in particular to comprehend the passage, because Mathematics is taught in English. Although Yusof (2003) reported that there is no correlation between comprehension and transformation in word problems and language, Cummins et al. (1988) stated that solving word problems requires, aside from mathematical computation, other kinds of knowledge, including linguistic knowledge, for understanding and comprehending the problems.

It is crucial to examine the issues around word problems because word problems are not only a vital topic in Mathematics but also play a huge role in education: they offer practice for everyday situations in which pupils apply different skills to solve problems and use mathematical modelling (Verschaffel et al., 2020). Pupils are not simply solving a mathematical word problem but are enhancing their mental skills at the same time: they strengthen their problem-solving skills, improve comprehension and analysis, and build logical and critical thinking skills. They will also gradually be able to relate word problems to everyday scenarios and apply the learned knowledge and skills effectively in different situations encountered in daily life (Dewolf et al., 2014).
Hence, finding the reason why pupils have difficulties in solving word problems and identifying a solution would greatly help lessen the burden for both teachers and pupils. There is hardly any academic literature to date regarding this strategy; it appears to have been created and recommended by teachers, as it can be found on teachers' websites. To date, one research dissertation involving the CUBES strategy with special education children has been done by Tibbitt (2016). The research therefore aims, by introducing the CUBES Maths Strategy, to give pupils a mnemonic device with an actionable step-by-step procedure that enables them to pick apart and comprehend what is being asked in a story problem. Each letter of CUBES represents an action to be carried out when solving word problems and assists the pupils in narrowing down what to focus on in stages.

1. C is for circling the numbers present in the word problem.
2. U is for underlining the question found within the story problem.
3. B is for boxing the keywords or operation clues.
4. E is for examining the word problem with three "What" questions:
   i) What label or units will my answer be?
   ii) What information have I obtained?
   iii) What information do I need?
5. S is for solving and checking if the answer makes sense.

Tibbitt (2016) conducted a study comparing the effectiveness of two different problem-solving strategies, Solve It! and the CUBES Maths Strategy, for special education pupils in the general education classroom. The difference between Tibbitt's study and this study is that this study focuses solely on the use of the CUBES Maths Strategy, which is worth studying given the lack of published studies to date on the use of this strategy to solve mathematical word problems.

Methods

Research is commonly done to better understand and make sense of the world's complexity, and the techniques used depend on the problem of the study. An action research approach utilising a mixed methods methodology was adopted for this study, where a combination of theory and practice is put together to form a cycle of activities (Avison et al., 1999). The data collected through this process were analysed quantitatively and qualitatively to provide an in-depth understanding of the research problem. The cycle starts with identifying a problem, developing and carrying out the action intervention, then interpreting and reflecting on the consequences. The action research spiral by Lewin (1946), illustrated in Figure 7, consists of four main phases, Planning, Acting, Observing, and Reflecting, which best explain this cycle. The researcher identified that pupils were struggling to solve mathematical word problems and made plans to help them overcome the difficulties and reduce their errors by introducing the CUBES Maths Strategy. The research started with a written pre-test to assess the pupils' prior knowledge and the method they use to solve word problems. It was then followed by two rounds of intervention lessons, in which solving word problems was re-taught by introducing the use of the CUBES Maths Strategy. A written post-test was then conducted after the intervention lessons to measure the pupils' achievement as well as to study the effectiveness of the newly taught method. Questions in the post-test were slightly different from the pre-test but tested the same concepts.
For example, the first question in both papers is a word problem on the addition of a 5-digit and a 4-digit number, but written in different scenarios. The remaining questions follow the same format. This ensures that pupils are not just recalling the same word problem from memory but must fully comprehend and solve the given word problem. The data collection process concluded with a semi-structured interview with the pupils. The main questions are based on Newman's Error Analysis, a powerful diagnostic tool that involves five stages and is commonly used to assess pupils' combined numeracy and literacy abilities when solving word problems (White, 2009). All recorded results from the written tests and interviews were further analysed. The flow chart in Figure 8 summarises the data collection process.

The participants in this research were pupils from a small Brunei government primary school between the ages of 8 and 9, at the Year 4 level. Pupils at this level are at a developing stage where they are able to read and understand word problems, although they might not be able to fully comprehend them mathematically. This is also the stage at which they begin to understand more complex mathematical problems and develop reasoning skills, truly grasping and differentiating between right and wrong (Zander, 2019). Pupils at this stage are also able to communicate and express themselves in words, which is advantageous, especially during the interview stage of this research.

The three instruments used for data collection in this research are the pre-test, the post-test, and semi-structured interviews. The pre- and post-test design was selected as it is commonly used to identify whether modifications made to the learning process cause changes in educational outcomes (Dugard & Todman, 1995). Scores from both tests will be recorded and analysed using Excel software, where a Paired T-Test will be used to compare the mean differences (Hsu & Lachenbruch, 2014) of scores to conclude the effect of using the CUBES Maths Strategy on pupils' learning performance in solving word problems. A Newman's Error Analysis interview will be used to identify the pupils' errors when solving word problems, as it provides a link between literacy and numeracy in which pupils' completed work is analysed from beginning to end (Rr Chusnul et al., 2017). The theory behind this analysis indicates that there are five stages to solving word problems. The Reading (Decoding) Stage is the initial stage, in which pupils start by reading and understanding the given word problem. It is followed by the Comprehension Stage, in which pupils identify what is given and what is required to be found. The Transformation (or Modelling) Stage requires pupils to use strategies, methods, or the correct formula to solve the problem. The Process Skills Stage checks whether pupils are able to carry out the operation correctly. The final stage is the Encoding Stage, in which pupils have to write their answers complete with the correct units.
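As a compact way to see how such stage-by-stage diagnoses aggregate into error frequencies (as later reported in Table 5), the following is a minimal sketch with made-up records; the pupil data here are hypothetical, not the study's:

```python
from collections import Counter

STAGES = ["Reading", "Comprehension", "Transformation", "Process Skills", "Encoding"]

# Hypothetical interview outcomes: for each pupil, the stages at which errors occurred.
# These records are illustrative only; the study's actual results appear in Table 5.
errors_by_pupil = {
    "P01": ["Comprehension", "Transformation"],
    "P02": ["Comprehension"],
    "P03": [],
    "P04": ["Reading", "Comprehension", "Encoding"],
    "P05": ["Transformation", "Process Skills"],
}

counts = Counter(stage for stages in errors_by_pupil.values() for stage in stages)
n = len(errors_by_pupil)
for stage in STAGES:
    print(f"{stage:15s}: {100 * counts[stage] / n:5.1f}% of pupils")
```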
Intervention lessons

The first intervention lesson focused on introducing the new strategy to the pupils, and the objective was for pupils to accomplish the first five steps of the steps-to-success checklist when using the CUBES Maths Strategy to solve word problems. As the lesson was done asynchronously, the researcher sent the PDF file of the lesson worksheet, the steps-to-success card, and the video link of the lesson to the pupils through their parents via the WhatsApp application. The pupils were instructed to prepare hard copies of both the given worksheet and the steps-to-success card before watching the given video. They were to attempt the word problems while following the researcher's step-by-step teaching in the video, which solved the two given example word problems. The first example showed and explained how the CUBES Maths Strategy is to be used when solving word problems, while the second example concentrated only on the first five steps of the steps-to-success checklist. The pupils were also given a couple of word problems to practise, concentrating only on the first five steps, to get accustomed to using the new strategy. Once they had completed their tasks, they were to submit a soft copy of their work to the researcher. The submitted work showed that many of the pupils had not used the new strategy to solve the given word problems. As there was no instantaneous interaction between the researcher and the pupils, the researcher could not confirm whether the pupils had watched or understood the given video or had gone straight to completing the provided worksheet. The researcher then made changes to the second intervention lesson accordingly. Instead of another video, the researcher used an online application, Quizizz, which enables one to create a lesson-slide quiz that promotes interaction with the pupils, although not synchronously. The questions given on Quizizz tested the pupils on the steps of the CUBES Maths Strategy taught in the previous lesson. A word problem was also given to check whether the pupils understood the use of the CUBES Maths Strategy. Toward the end of the Quizizz lesson, there were some questions regarding the CUBES Maths Strategy which acted as the Exit Ticket for the intervention lesson; these provided the researcher with information on the pupils' thoughts and opinions regarding the new method. Once the pupils completed the Quizizz, they had to completely solve the word problems in the lesson worksheet. Figure 9 shows a summary of the intervention lessons.

The last question in the Exit Ticket asked the pupils to state the reason for their answer to question 5. Below are some of the responses from the pupils, who were divided on using the CUBES strategy.

"Yes. Easy for me to answer all the questions."
"No. Because it is hard to understand the cube strategy."
"No. It is difficult unless if there is more explanation by the teacher."
"Yes. Easy to know how to solve."

Responses to Q3 ("Do you think the CUBES Maths Strategy is easy or difficult?") were split 50%/50%.

Pre- and post-tests

Among the 20 pupils who initially consented to participate in this research, 7 withdrew, leaving 13 participants. Hence, only the 13 pupils who participated throughout the research were considered in the analysis of the data collected. The study started with a written pre-test; the scores collected were analysed, and Table 1 below shows the descriptive statistics.
The central tendency of the pre-test scores shows that the mean is 5.538, while the median and the mode are both 6. The minimum score of 3 and the maximum score of 8 suggest that most pupils are familiar with solving mathematical word problems. The standard deviation of 1.761 shows that the scores are not very dispersed. As for the written post-test, the mean of the scores is 5.769, the median is 6, and the mode is 8. The standard deviation is 2.315, with a minimum score of 1 and a maximum score of 8. It was found that in the written post-test only 3 pupils made use of the CUBES Maths Strategy, and they carried out only part of the taught procedure. Samples of these pupils' work are shown in Figures 13-15: Figure 13 shows a pupil who did C, U, B, and S while omitting the E step; Figure 14 shows a pupil who underlined the important information required to solve the given word problem instead of circling, underlining, and boxing; and Figure 15 shows a pupil who only circled the numbers and underlined the identified keyword. Pupils also made careless mistakes, such as writing the wrong numbers in the algorithm and miscalculating. The descriptive statistics of the written post-test are shown in Table 2.

Comparison of pre- and post-test scores

To identify the effects of using the CUBES Maths Strategy to solve word problems, the scores from both the written pre- and post-tests were analysed. The bar graph (Figure 16) displays an overview of the pupils' pre- and post-test scores. As seen from the graph, some pupils' scores remained the same, increased, or decreased in the post-test. The boxplot (Figure 17) shows no outliers in the scores, meaning no scores fall far beyond the quartiles; hence, neither the mean of the data set nor the skewness of the distribution is unduly affected.

The data obtained meet the assumptions required to run a Paired T-Test. First, the subjects are independent, as each pupil completed their own work for both the pre- and post-test. Second, each pair of measurements (test scores) was obtained from the same pupil on a continuous scale. Third, the distribution of differences in scores is normal, as shown in the boxplot (Figure 18), histogram (Figure 19), and Normal Probability Plot (NPP) (Figure 20); Table 3 shows the descriptive statistics of the differences in scores, and Table 4 the results of the Shapiro-Wilk Normality Test. Regarding the normality of the data, from Table 3, the mean and median of the differences in scores, 0.230769 and 0 respectively, are relatively close, while the skewness of 0.062927 is close to 0. The boxplot (Figure 18) shows no outliers in the data, and the normal probability plot (Figure 20) shows a straight line, indicating that the data are normally distributed. This is also confirmed by the results of the Shapiro-Wilk Test (Table 4), where the p-value is greater than 0.05, so the null hypothesis of normality is not rejected. Although the data meet all the assumptions required to run a Paired T-Test, with the sample size being only 13, a Wilcoxon Signed-Rank Test was conducted instead.
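For readers wishing to reproduce this style of analysis outside Excel, the following is a minimal sketch using hypothetical score vectors (the study's individual scores are not reproduced here), assuming SciPy's implementations of the Shapiro-Wilk and Wilcoxon signed-rank tests:

```python
from scipy.stats import shapiro, wilcoxon

# Hypothetical pre/post scores for 13 pupils (illustrative only; not the study data).
pre  = [3, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 8, 8]
post = [1, 4, 6, 5, 6, 8, 6, 7, 5, 8, 8, 8, 3]

diffs = [b - a for a, b in zip(pre, post)]

# Normality check on the paired differences (Shapiro-Wilk).
stat, p_norm = shapiro(diffs)
print(f"Shapiro-Wilk: W = {stat:.3f}, p = {p_norm:.3f}")

# Wilcoxon signed-rank test; zero differences are discarded by default,
# and the reported statistic is the smaller of the two signed-rank sums.
res = wilcoxon(pre, post)
print(f"Wilcoxon: T = {res.statistic}, p = {res.pvalue:.3f}")
```

Note that zero differences are discarded before ranking, which is consistent with the critical-value lookup below using an effective sample size of 9 rather than 13.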
Hence, using Excel software and the built-in Real Statistics Resource Pack, a Wilcoxon Signed-Rank Test was run on the recorded scores with α = 0.05 and the null hypothesis H0: there is no effect. The results show that the median of both tests is 6. The sum of the positive ranks is T+ = 27, while the sum of the negative ranks is T− = 18, giving the test statistic T = 18 (the smaller of the two sums). The p-value is found to be p > 0.05 with a small effect size (r = 0.096). From the Wilcoxon Signed-Rank Test critical value table, the critical value corresponding to α = 0.05 and a sample size of 9 is 5. Since the test statistic (18) is not less than the critical value of 5, we fail to reject the null hypothesis. Hence, there was no detectable effect of using the CUBES Maths Strategy on pupils' learning performance in solving word problems.

Newman's error analysis interview analysis

Due to the small sample size, all the participants were to be interviewed, to better identify the types of errors made when solving word problems by checking each pupil individually. However, out of the 13 pupils, only 10 were able to make time to be interviewed through the Zoom meeting platform. The word problem used in the interviews was the last question from the written post-test paper because, from the breakdown of scores, it was the question that produced incorrect responses from 9 out of the 13 pupils. All the pupils were interviewed using the same questions regardless of their answers in the written post-test. From the interviews, it was found that some of the pupils who had committed careless mistakes could identify their errors while referring to their post-test papers during the interview.

Table 5. Interview results based on Newman's error analysis, recording for each pupil (ID) whether they had problems at the Reading, Comprehension, Transformation, Process Skills, and Encoding stages.

Four of the interviews were conducted bilingually, mainly in the Malay language, as the pupils had difficulties understanding what was asked or spoken in English. Of these, 2 pupils could not read the given word problem without assistance from their parents, repeating it word by word after them. Some pupils could read the given word problem smoothly but struggled to read the given numbers mathematically. Being able to read does not mean that they were able to fully understand and comprehend the given word problem: it was identified that 70% of the pupils could not fully comprehend it. Most of the pupils, when asked, 'Can you explain what the sentence "Cinema B sold 3826 more tickets last month." meant?', replied by repeating the sentence. When this was followed by the question 'Which is more/less?', 7 out of 10 pupils answered by identifying the larger or smaller number, respectively. This indicated that the pupils did not fully comprehend the word problem. Although 8 of the pupils could identify the correct operation to be used, only half of them could make the connection between the operation and the given keyword. However, this does not indicate that they are free of transformation error, as they might simply recall from prior knowledge that the word 'more' suggests addition.
This seems to be the case for 2 pupils, as they were unable to comprehend the word problem but were able to identify the correct operation to be used. Although only 40% of the pupils had errors in the process skills stage, one pupil had errors in the transformation stage but was able to perform the mathematical calculation accurately; during the interview, this pupil was able to identify the correct operation to be used but was unable to justify it. As errors in the process skills stage occur only when the pupil cannot complete the operation accurately, the fact that pupils solved the word problems without using the CUBES Maths Strategy or following the given procedures was disregarded at this stage. Only 3 pupils were seen to have solved the word problems in the post-test paper using the CUBES Maths Strategy, and they applied only the first three steps of the procedure, as shown in Figures 13, 14, and 15 above. When asked in the interview why the next step was omitted, they stated that the fourth step (Examine the problem) is difficult; hence they skipped it and proceeded to solve the problem. The remaining pupils, who did not use the strategy, mentioned that it was because they did not understand it. For the Encoding stage, pupils who were able to identify the mistakes made in their post-test during the interview were seen to recalculate the numbers and give the correct answer, representing it appropriately with the correct units. The same happened for those who had answered the question correctly but had not included the correct units in their post-test paper, while the other 40% of the pupils were unable to state the correct answers with the appropriate units. The results in Table 5 therefore show that most of the pupils had errors in the Comprehension (70%) and Transformation (50%) stages.

Discussion

Due to the COVID-19 restrictions and policies, the research was conducted not only in a remote setting but also asynchronously. During this endemic stage, pupils in the primary sectors still studied from their homes while most parents worked from their offices, which meant there might be a lack of adult supervision while pupils attended synchronous live classes. Another reason for conducting classes asynchronously is that the majority of the pupils did not own personal devices or have a stable internet connection, so they had to wait for their parents to get home from work to use their devices. In addition, pupils with multiple siblings sharing a single device might not be able to attend synchronous classes during the assigned period. This is especially the case for pupils from families with socio-economic disadvantages who cannot afford the necessary devices or reliable internet access (Qazi et al., 2020) to stay connected and attend the online classes held by their teachers. As most of the pupils did not own a personal device, communication between pupils and teachers went through their parents via the WhatsApp application, one of the most widely used applications for many purposes, including learning (Kholis, 2020).
Although this application is convenient for sending instant messages and supports multimedia messages, the parents acting as intermediaries reduces the social interaction that is essential for children's learning and development. Hurst (2013) concluded that interaction between peers contributes greatly to learning, as it enhances both pupils' critical thinking and their problem-solving skills. Okita et al. (2007) found that novel social variability in one's responding tone during interaction leads to superior learning. With the lack of interaction between the researcher and participants, pupils were seen to have difficulties relating to the lesson content and felt uncertain about what the teacher was explaining (Syahputri et al., 2020). The researcher was also unable to identify whether the pupils fully understood the content of the given lessons (Mukhtar et al., 2020) and whether further clarification was required, as there was no feedback regarding the lessons taught. Where feedback is given, it is not immediate, as it requires one party to wait for the other to review the given tasks and give comments (Vlasenko & Bozhok, 2014). In addition, the researcher was unable to observe the pupils while they answered the pre- and post-test papers, and so could not check and verify whether the pupils took the tests on their own or with assistance from others. As the pupils did not own their own devices, the researcher had to schedule interview appointments with the pupils around their parents' availability.

The first intervention lesson was conducted with a video that introduced the CUBES Maths Strategy, including two examples of solving word problems with a straightforward step-by-step procedure. With a video, pupils can easily access the material, pausing, rewinding, or replaying it (Bell & Bull, 2010) whenever required. The video must have clear visuals and audio, enabling the pupils to follow and understand what is being shown and explained. There was no instantaneous feedback, and the only interaction was when the pupils submitted their given worksheets, which showed that most pupils did not use the newly taught strategy. This led the researcher to presume that the pupils were neither fully attentive when watching the given video nor taking note of the instructions, and might simply have downloaded the given worksheet and completed it as it was. For the second intervention lesson, taking the first lesson and its outcome into consideration, the researcher used the Quizizz online platform to create a lesson-like quiz. This game-based education application includes interactive features through which participants can draw and write on the screen where required. As the application is widely used to encourage friendly competition among peers and to motivate pupils to study (Zhao, 2019), the researcher hoped that by using this application the pupils would be more interactive and take the initiative to participate in the lesson. The responses indicated that the pupils struggled with the Examine the problem step but did not reach out to the researcher regarding this matter, so the researcher was unable to clarify and assist.
The findings of this study regarding the pupils' errors are similar to the analysis by Raduan (2010), which showed that many of the mistakes made when solving word problems were comprehension errors, followed by transformation errors. This study also showed that the majority of the pupils did not have any difficulties in reading, similar to the findings of Triliana and Asih (2019), where pupils were able to read the words accurately but were unable to fully understand the overall problem, including specific terms within it. An Exit Ticket section included in the Quizizz asked the pupils their thoughts on using the CUBES Maths Strategy when solving word problems. Only 10 pupils did the interactive activity, and from their responses the researcher conjectured that the pupils do think the CUBES Maths Strategy is simple to use for solving word problems, but that they need more explanation to fully understand how to use it easily. This also indicates that the pupils are not ready for fully independent learning and are highly reliant on teachers. Jorgensen (2003) stated that although she believes Asynchronous Learning Networks (ALN) can foster and offer a rich collaborative learning environment, they might not be suitable for everyone. She also mentioned that young college students may still require a face-to-face learning environment, suggesting that much younger pupils would need traditional face-to-face classroom settings even more. The responses received from the pupils showed that they were willing to learn more about this strategy and were not applying it because they had not fully understood it. Hence, there is a need for teachers' professional development regarding the use of the CUBES Maths Strategy, which would allow teachers to receive proper training and gain the knowledge required to teach this strategy. In addition, teachers would be able to explore the strategy as a group and produce multiple ways to help pupils understand the strategy, compatible with their individual abilities and needs. As the pupils were found to be struggling in both the comprehension and transformation stages, it is important for teachers to assist pupils in processing the given information so that they are able to translate the sentences from words to numerals. Teachers can also help pupils relate to the word problems, perhaps including the use of diagrams to help them visualise the problem.

Conclusion

Although this study found no significant difference in solving word problems using the CUBES Maths Strategy, Newman's Error Analysis found that the problem lies deeper: the pupils in this study could not comprehend the word problems. Even if the pupils were able to Circle, Underline, and Box the important details, they still fell short of the E and S parts of CUBES; without comprehension, the pupils were unable to Examine the problem and even found it difficult to Solve. Furthermore, the remote learning situation, in which synchronous interaction between teacher and pupils was absent, exacerbated this problem.

Funding Statement

This work received no specific grant from any public, commercial, or not-for-profit funding agency.
7,746.2
2023-01-02T00:00:00.000
[ "Mathematics", "Education" ]
Semantic-aware transformation of short texts using word embeddings: An application in the Food Computing domain

Most works in food computing focus on generating new recipes from scratch. However, a large number of new online recipes are generated daily, together with many user reviews containing recommendations to improve a recipe's flavor and ideas to modify it. This encourages the use of these data for obtaining improved and customized versions. In this thesis, we propose an adaptation engine based on fine-tuning a word embedding model. We will capture, in an unsupervised way, the semantic meaning of the recipe ingredients. We will use their word embedding representations to align them with external databases, thus enriching their data. The adaptation engine will use this food data to modify a recipe into another that fits specific user preferences (e.g., decreasing caloric intake or making a vegan version). We plan to explore different types of recipe adaptations while simultaneously preserving essential recipe features such as cuisine style and essence. We will also modify the rest of the recipe to reflect the changes so that it remains reproducible.

Introduction

Our dietary habits have a huge impact on health and, thus, on quality of life. In the last decades, the amount of nutritional data available has notably increased. This fact, together with the ubiquity of smartphones, has encouraged the use of machine learning techniques for automating tedious and repetitive tasks such as diet generation. In this context, the food computing concept refers to the use of food data to improve quality of life as well as to understand human behavior (Min et al., 2019). Recipes and their composition have been widely studied in food computing, especially in the food recommendation systems field (Teng et al., 2012). These systems mainly perform recipe-based nutrition assessment, looking for combinations suitable to user preferences. The use of predictive algorithms to understand relations between recipes has emerged in recent years (Sajadmanesh et al., 2017). Recently, authors have taken advantage of these tools to generate synthetic food data. Recipe generation is a current area of research, and the latest works in the area have focused on the creation of synthetic recipes. However, these works have concentrated on automated text generation from scratch instead of taking direct advantage of already existing recipes to generate new versions. In this thesis, we will address the problem of partial generation of recipes. In particular, we will put our effort into recipe adaptation and recipe completion tasks.

Online cooking communities and social media generate a huge amount of food data daily, mostly cooking recipes that users want to share with the world. In these communities, many users review the shared recipes, often giving feedback, customizations, and suggestions for tasty versions of a given recipe. We plan to use this information to generate new recipe versions. In particular, we will modify recipes to fulfill users' requests. There are many reasons to modify a recipe: a dietary restriction such as a vegan or vegetarian diet, a lack of ingredients at home, making the recipe tastier, or cooking a kid-friendly version. Also, many users follow restricted diets linked to personalized nutritionist assessment. A user might require a light version of a given recipe or the inclusion of high-protein ingredients, among others.
We propose to automate the process of ingredient modification in a recipe and to extend this idea to a recipe completion task. In both cases, we can consider several criteria simultaneously, such as those mentioned before. We thus tackle a twofold challenge: we have to preserve the semantics and essence of the recipe while combining heterogeneous sources to incorporate nutrition and user knowledge during the adaptation. Here, a domain-specific language model enables us to tackle both purposes. We propose to use a fine-tuned word embedding model as the base of our contribution. We will use it to model the recipe ingredients and to incorporate useful information from external sources (i.e., completing the ingredient data with nutrition information, user tips, and cuisine styles). Then, we will use the merged data in an adaptation function to find the most suitable foods for adapting a recipe to given restrictions. The semantic information combined with the external data will be the base of the adaptation engine. But adapting recipes does not only consist of dealing with ingredients. Likewise, we will use this model for a synthetic adaptation of the title, recipe steps, and any extra recipe data affected in the process.

Related work

Cooking recipes have been largely explored in food computing (Min et al., 2019). Recent recipe-based works in food computing have agreed on the advantages of data mining techniques to understand how people cook. Natural language processing approaches to food computing tasks have mainly focused on the analysis of cuisines and ingredient relations (Min et al., 2019). From a wide perspective, these relations have been addressed using the textual descriptions of foods and flavor networks. The latter have been widely studied with statistical natural language processing methods (Takahashi et al., 2012; Chen, 2017; Chang et al., 2018). Our proposal is particularly related to the following topics.

Recipe generation and completion

Creative cooking is the food computing area focused on the automatic generation of new recipes. Here, there is a distinction based on the approach: synthetic recipes are created in two main ways. One is recipe completion, which generates partial synthetic recipes from already existing ones. Completing recipes has also been studied in the frame of food recommendation systems. In (Cueto et al., 2019), the authors tackle the problem of completing partial recipes by using context-based recommendation. Recipe generation tasks have also considered the cuisine style for adapting recipes to other cultures (Kazama et al., 2018); in this case, the authors propose a neural network method to change ingredients into their equivalents in other cuisines. Regarding recipe generation, cooking recipes have been generated with natural text generation methods (Aljbawi, 2020). Due to the repetitive results usually obtained with this approach, the authors in (Bosselut et al., 2018) proposed a synthetic recipe generation model with a reward to obtain more coherent and less repetitive texts.

Word embedding in food computing

Word embedding models in food computing have mainly focused on ingredient analysis. One of the more relevant works in this area is food2vec, where the author used a word embedding model trained on lists of ingredients to understand relations between ingredients and cuisines of the world (Altossar, 2015).
Recipe2vec is another model trained on food data, in this case for recipe retrieval purposes (BuzzFeed and Tasty, 2017). The advantages of embedding models for fusing heterogeneous food data for multiple purposes have also been noted; in (Salvador et al., 2017), nutritional and social media textual data are integrated, although that work is more specialized in image recognition tasks than in language processing. In (Chen et al., 2019), the authors used a word embedding model to detect ingredient relations in order to create pseudo-recipes: a model trained on lists of recipes detects which ingredients appear together, and a pseudo-recipe object is created based on this idea.

Transfer learning

The state of the art in NLP tasks is based on transfer learning models. It is very useful for specific domains where data are limited, since general-purpose models will perform poorly there. This approach allows training models with a bigger capacity while capturing the subtle essence of the problem addressed. The most well-known models using fine-tuning for specific tasks are BERT (Devlin et al., 2019) and GPT-2 (Budzianowski and Vulić, 2019), with excellent results. Transfer learning has been used in different specific areas, e.g., in biomedicine. To the best of our knowledge, transfer learning has not been proposed to extract semantic information from food item descriptions in order to combine heterogeneous sources.

Conditional text generation

Controllable text generation is the area where sentence attributes can be controlled by factors such as age, gender, or style (Prabhumoye et al., 2020). In this problem, we have a sequence output that is conditioned on the sequence input. Text generation language models have to assess the need for controlling specific parts of the task to resolve a specific problem (Keskar et al., 2019). In this line, recent approaches have taken an interest in style transfer techniques. Text style transfer allows adapting a synthetic text to different situations such as audiences, complexity levels, and other contextual circumstances (Li et al., 2020). Recent style transfer algorithms employ parallel data in supervised learning approaches and non-parallel data in seq2seq architectures for unsupervised approaches. Also, Variational Auto-Encoders have been applied to this aim by separating content and style in the latent space for better adjustment of the style (Fu et al., 2018).

Proposed methodology

We have divided our approach into two tasks, explained in the following subsections.

Heterogeneous data handling

The first problem that appears when modifying a recipe is obtaining enough food knowledge to generate recipes that fulfill user preferences. One of the main challenges in food computing is the inherent difficulty of using food features from many different nutritional sources. Consequently, food items need prior processing to be handled jointly. Following this idea, we can use the item textual description to identify equivalent items between databases, allowing their joint use as a single data collection (Morales-Garzón et al., 2020). Notice that ontology-based methods could perform well on this problem, but these models have issues when applied to ingredient-based tasks: they do not represent highly detailed ingredients and have difficulty generalizing to online recipes. Furthermore, their knowledge extraction has to be hand-crafted. To overcome this, we propose to model ingredient descriptions with a word embedding model.
This unsupervised model can deal with arbitrarily sized text and capture the semantics of cooking.

Models

Since the food domain is very specific, general-purpose word embedding models will perform poorly. This issue can be solved by using pre-trained models and performing transfer learning. Deep models will be trained on large unlabeled text databases and then fine-tuned to the cooking domain. This approach can automatically capture the semantics of cooking without human supervision. First, we will perform a transfer learning task with a BERT language model (Devlin et al., 2019). Using BERT will enable us to deal with one of the more intricate facts of cooking: the same ingredient can be used in different forms and meals (e.g., a user could use flour for a cake, but also for frying fish). With a sentence-based model, we will be able to represent the current context in which an ingredient is used, which will enable us to find better food alternatives for each ingredient. We plan to test the performance of our model by replicating the process with GPT-2 (Budzianowski and Vulić, 2019). The main difference is that while BERT looks at the context around a word, GPT-2 only looks backward. In this thesis, we will explore both and discern the advantages of each for the cooking domain.

Distance metrics

We understand ingredient mapping as the search for an equivalent food in an external source. This equivalence can be obtained by calculating the distance between ingredient descriptions. We consider an ingredient description to be a short text (e.g., "almonds toasted"). We plan to use the food representations obtained with the embedding vectors to find food equivalences across databases. We plan to test model performance with different metrics, including word mover's distance as a baseline. We also plan to use a distance metric proposed in (Morales-Garzón et al., 2020), which has been shown to work remarkably well with food data descriptions.

Dataset

We plan to use a pre-trained word embedding model trained on the Wikipedia and Book Corpus datasets 1. We will re-train the model on a food-based textual corpus. To do this, we will use a large recipe dataset available on archive.org 2. The dataset contains more than 200,000 recipes with their preparation step texts. These texts contain meaningful information about the science of cooking, such as ingredient combinations and cooking processes.

Adaptation engine

Deciding on the most suitable version of a recipe is a very subjective process. Consequently, codifying human adaptation rules is difficult and very tedious. Our approach consists in using word embedding vectors to represent an existing cooking recipe. For that, we will extract the ingredients from a recipe and obtain their embedded representations with the transfer learning model. Once we have a representation of the ingredients, we proceed to adapt them to fit the user requirements. We will take advantage of the information captured in the model (e.g., similarity relations between foods) to adapt the ingredients, with the aim of preserving the recipe essence. In this way, semantic relations between ingredients can influence the decision when changing an ingredient for another that fits the recipe. Besides, changing the ingredients alone will not result in a finished recipe: we will also generate automatic text from the ingredient list to produce coherent cooking instructions.
Thus, the process consists of three steps: (1) obtain a semantic representation of the ingredients, (2) adapt the recipe by changing the ingredients to other foods that fit, and (3) modify the rest of the recipe accordingly, i.e., the recipe preparation steps, nutrition information, and title if needed. Since the title and nutrition data can be easily obtained from the final ingredients, the challenge resides in altering the preparation text. Conditional generation and style transfer techniques will be used in this last step. At the end of this process, the user will have the full recipe with the list of ingredients and the cooking procedure, and will be able to reproduce it at home. See Figure 1 for a better understanding of this process.

Recipe modeling

First, we will model a recipe with the transfer learning model. The ingredient information contained in an online recipe is short and may not be sufficient for making a quality adaptation. As introduced, we plan to combine the ingredients with food features such as cuisine style, nutrition information, packaging information, cooking tips, and potential ingredient relations. This information has to be obtained from external heterogeneous sources. We will join it in one object, merging the ingredient data with food knowledge from these external databases. Subsection 3.1 describes this procedure.

Ingredient adaptation

One part of the task is properly adapting the ingredients. There are two main ways of adapting a recipe: in the first, some ingredients of a recipe are replaced following a criterion, e.g., converting a given recipe into a vegan version; in the second, new ingredients are suggested for addition. In both cases, we can consider several criteria simultaneously. For example, a user might want to make a recipe with fewer calories or more protein. Here, we will design the adaptation function as a multiobjective optimization problem with restrictions, e.g., maximizing the use of sweet ingredients while minimizing the calories, subject to maintaining the coherence of the recipe. Notice that similarity-based functions alone are suitable for maintaining the coherence of the recipe, but they do not take into account other factors like calories. Thus, the ingredient adaptation task will consider the joint ingredient data obtained from combining the ingredient with external sources, enabling us to add adaptation knowledge to this step.

Feeding the adaptation procedure

We can make use of online user interaction data to learn how users react to recipes and to measure which ingredient combinations are more appealing to them. We will analyze this data to extract knowledge to feed the adaptation function.

Adaptation of the rest of the recipe

Adapting a recipe does not just consist of changing the ingredients for suitable alternatives; we also need to adapt the preparation steps to fit the new ingredients. This part is complex because it needs to retain the coherence of the original recipe where possible. We plan to explore the use of word embedding approaches to partially generate synthetic text using keywords. We will start from the original recipe, detecting the steps that must be modified. Notice that some recipe objects also contain nutrition tags for a serving.
In this case, we will adapt this information using the ingredient data where possible.

Dataset

We plan to study recipes in specific cuisines. For that, we will use recipes extracted from Yummly. One of the tags stored in Yummly recipe data is the geographical origin of the recipe. There are several Yummly datasets online that we can use, with ingredients, preparation texts, and cuisine type 3. Additionally, the Yummly website provides user reviews with suggestions for altering recipes and recommendations of ingredient substitutions (and additions) to improve the taste of the dish. Regarding nutritional food data, there are open-source nutrition datasets available for obtaining data on the most common foods and dishes. One example is the USDA database, maintained by the Department of Agriculture in the United States (Gebhardt et al., 2008). There are also market product sources for accessing typical foods in specific zones of the world. Open Food Facts 4 is an open-source project with the aim of making worldwide food products accessible. Resources about how users interact with recipes are also available: the Food.com dataset 5 available on Kaggle provides this information for more than 200,000 recipes from the popular cooking site Food.com 6.

Evaluation

Validating recipe adaptations is a subjective procedure. Depending on cultural factors, the type of meal, the flavors, and other intrinsic combinations, what could be an excellent recipe for one user could turn out to be unpalatable for another. This variability makes it difficult to measure the adequacy of an adapted recipe. To tackle it, we plan to evaluate the proposed method with an online survey of both regular and expert users. For this, we will generate adapted recipes for different circumstances. Each recipe will receive a score, where the lowest value indicates that the adapted recipe is unappetizing and the highest a very succulent one. We also plan to collect adaptation suggestions in this step to use as feedback for future improvement.

Strengths

With the rise of technology and, consequently, the large number of recipes shared on the internet, food computing has played an undeniable role in recipe retrieval systems. These systems give access to online recipes to speed up recipe searching whenever a user wants to prepare a dish. We believe that the integration of our approach into such software could meet user needs when looking for cooking inspiration. Additionally, it is worth noting that a recipe-based word embedding model could take part in multiple food computing problems. One of its applications is detecting recipe similarity to ensure variety in nutrition assessment systems. We believe that food computing is not the only application of our approach. Personalized beauty treatment is another area in which our proposal could be useful. Many natural beauty care recipes, consisting of a list of ingredients and instructions to create remedies for different purposes, can commonly be found on the internet. Among many other factors, this kind of treatment handles user expectations, allergies, and the cosmetic composition of the treatment. A transfer learning model in this area could be applied to adapt these kinds of treatments to the user's needs.

Summary

Our proposal consists of using a transfer learning model in the food domain to adapt recipes to fulfill user needs. The challenge resides in using the model for two different tasks.
First, we plan to use the model to complete ingredient information with data from external sources, such as nutritional data or cuisine traditions. We will then employ this joined data to adapt a recipe to fulfill a need. Finally, we will use the language model to adapt the rest of the recipe so that it is consistent with the adapted ingredients.
4,667.6
2021-01-01T00:00:00.000
[ "Computer Science" ]
Generation of spherically symmetric metrics in f(R) gravity
In D-dimensional spherically symmetric f(R) gravity there are three unknown functions to be determined from the fourth-order differential equations. It is shown that the system remarkably may be integrated to relate two functions through the third one, providing a reduction to second-order equations accompanied by a large class of potential solutions. The third function, which acts as the generator of the process, is F(R) = df(R)/dR. We recall that our generating function has been employed as a scalar field with an accompanying self-interacting potential previously, which is entirely different from our approach. Reduction of f(R) theory into a system of equations seems to be efficient enough to generate a solution corresponding to each generating function. As particular examples, besides the known ones, we obtain new black hole solutions in any dimension D. We further extend our analysis to cover non-zero energy-momentum tensors. Global monopole and Maxwell sources are given as examples.

Introduction
f(R) gravity is one of the modified theories of Einstein's general relativity that has attracted much attention in recent times [1-6]. In [7] f(R) = R + αR^2 with α > 0 was introduced as the model of the inflated universe, while f(R) = R − α/R^n (α > 0, n > 0) was considered as a candidate for the dark energy model [8-13]. This model, however, is not a viable model for dark energy, and instead f(R) = R − αR^n with α > 0 and 0 < n < 1 emerged as an alternative, proposed in [14,15]. Later on, more viable models were studied in [16-20]. A detailed review of these models is given in [21] (other review papers are [22-25]). Some recent works on solutions in f(R) gravity are [26-48]. For f(R) = R in D = 4 dimensional spacetime the theory coincides with standard general relativity, but otherwise it comes with an action that depends arbitrarily on the Ricci scalar. Finding exact solutions in this theory, with its fourth-order derivatives of the metric tensor, is both important and challenging. Apart from exact analytic solutions, there are f(R) models that can only be expressed implicitly in non-polynomial expressions. Each particular model has advantages/disadvantages as far as experimental tests are concerned [49-55]. There are even models that lack the Einstein R-gravity limit. Among other expectations, the UV/IR behaviors at near/far distances and quantum renormalizability with power counting of the counter terms are prominent. Preferring at any cost to abide by the classical regime, we confine ourselves first to sourceless (vacuum) f(R) models that admit exact integrals. In the last section we extend our discussion to cover external sources such as a global monopole [56] and the electromagnetic field. Let us add that the equivalence of f(R) gravity to Brans-Dicke (BD) theory (ω = 0) with a potential has also been highlighted extensively in the past as a transition between Jordan and Einstein frames.
In this approach the exact solutions can be generated by adopting scalar field ansatzes, which in general bring intricate potentials into the Lagrangian. Our method will be confined entirely to the Jordan frame, without reference to the BD field or any scalar potential. The vacuum of f(R) gravity is known to carry its own curvature sources. By vacuum in this theory is meant the absence of an external energy-momentum tensor T_{μν} of any physical source [57]. Carames and Bezerra de Mello in the latter work have considered the spherically symmetric vacuum solutions of f(R) gravity in higher dimensions. We shall rederive most of their results anew, together with some additional extensions which we develop further. The addition of an external T_{μν} ≠ 0 no doubt makes the problem technically more complicated, but following the lesson learned from the vacuum/empty solutions of the f(R) = R theory of gravity, we will attempt to derive the most general equations and, in some specific cases, the solutions as well. The D-dimensional spherically symmetric line element that we shall consider will be of the form (1), in which A(r) and B(r) are metric functions to be determined, while dΩ²_{D−2} represents the (D−2)-dimensional unit spherical line element. In particular integrals we have the restrictive case A(r) = B(r) included, but more general cases with A(r) ≠ B(r) are interesting as well. Besides the metric functions A(r) and B(r) we shall employ a third function, F(R) = df/dR, which characterizes the type of the f(R) gravity. Let us add that not in all cases of f(R) models can the explicit form of f(R) be expressed analytically in terms of the variable R, the Ricci scalar. Instead, it involves a transcendental part that cannot be inverted in the form r = r(R); these are hybrid forms. We add that even these hybrid forms do not prevent us from checking df/dR > 0 and d²f/dR² > 0, the crucial conditions for the absence of ghosts and for thermodynamic stability, respectively. In brief, what has been achieved in this paper is to show that the metric functions A(r) and B(r) are related through an integral expression for the function f(R) (or f(r)). This amounts to the fact that once f(R) is given, it acts as a generator producing a new (A(r), B(r)) pair. The function A(r) is expressed in terms of B(r) and f(R), and the remaining equation is reduced to a master equation satisfied by B(r). Once we give an ansatz for f(R), our master equation can in principle be integrated to obtain B(r). In this manner we can obtain an infinite class of metrics in f(R) gravity, generated from an infinite set of f(R). No doubt the dimensionality of spacetime D (= d + 1) also plays a role in the derivation. In particular, we present examples of new black hole solutions in D ≥ 3 by the method described above. We wish to add also that in the reduction process the system of differential equations in f(R) gravity reduces naturally from fourth order to second order.

The paper is organized as follows. In Sect. 2 we rederive the f(R) field equations in D dimensions, which is comparable in some sense with [57]. A number of examples to justify the effectiveness of our method are given. Generalization to T_{μν} ≠ 0 is analyzed in Sect. 3. We end our discussion with our conclusions in Sect. 4.

The field equations in D dimensions
The D-dimensional vacuum f(R) gravity is represented by the action, in which f(R) is a function of the Ricci scalar R and D ≥ 3.
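For orientation, a standard textbook form of this action and of the resulting metric-formalism field equations is the following (sign and normalization conventions are our own assumptions, not taken from the paper):

$$I = \frac{1}{2\kappa}\int d^{D}x\,\sqrt{-g}\,f(R),$$

$$F(R)\,R_{\mu\nu} - \frac{1}{2}f(R)\,g_{\mu\nu} - \nabla_{\mu}\nabla_{\nu}F(R) + g_{\mu\nu}\,\Box F(R) = 0, \qquad F(R)\equiv\frac{df}{dR},$$

where $\Box = \nabla^{\alpha}\nabla_{\alpha}$ is the covariant Laplacian; the right-hand side is replaced by $\kappa T_{\mu\nu}$ when matter is present, as in Sect. 3.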
Variation of the action I with respect to g_{μν} provides the field equations (in the metric formalism), in which F = df/dR and □ is the covariant Laplacian. The general spherically symmetric line element is given by (1), and the field equations (3) are then given explicitly in terms of A, B and F. Herein and in the rest of the paper, D_k = D − k and a prime stands for the derivative with respect to r. A particular combination of these three equations leads to two equations which are independent of f. The first of them, Eq. (4), may be written in a form which can be integrated, yielding Eq. (11). The second one, Eq. (5), upon using (11), becomes independent of A too; in closed form it reduces to a linear equation for B(r), Eq. (13), with H, P and Q given in (12), (14) and (15). Finally, the explicit form of f in terms of r follows, and to complete our analysis we also give the explicit form of the Ricci scalar in terms of r. In summary, the only equation to be solved is Eq. (13), which is second order and linear in B. The procedure therefore reduces to choosing an F(r), which eventually determines the form of f(R), and solving the single equation (13) to find B(r) and consequently A(r).

Applications of the method
Before we give certain applications of our formalism, we would like to compare our approach with the work of Carames and Bezerra de Mello [57]. The main difference can be seen from the fact that in [57] there are two generating functions (so to say), F and Y. In other words, Eqs. (16) and (17) of [57] are coupled, and one must consider them together to find a solution to the field equations. In our formalism we have only one generating function, F, and Eq. (13) is the only equation to be solved. In the following three cases we shall show that for the simple cases (F = 1 and F = 1 + αr) our results overlap with [57], but for the more complicated case (F = αr^a) our solution is the general one, while the solution given in [57] is a restricted one (compare Eq. (46) in [57] with (35) and (36) in this work).

F(r) = 1
We start with the simplest case, F = 1, or equivalently f(R) = R − 2Λ, in which −2Λ is an integration constant to be interpreted as the cosmological constant. The main equations (11) and (13) are readily integrated, with integration constants C_1 and C_2, and the constants are fixed so that M is the ADM mass. The solution for D > 3 is the Schwarzschild de/anti-de Sitter black hole, and for D = 3 it is the BTZ black hole. Let us add that F(R) = ξ = constant does not change the nature of the solution for a metric with constant scalar curvature. We note that our results in this section are, expectedly, the same as Sect. 3.1 in [57].

F(r) = 1 + αr
As regards Sect. 3.2 of Ref. [57], we consider F″(r) = 0 in our general formalism, but instead of going through a general D-dimensional solution, we investigate the cases with f(R) in closed form. From Eq. (10) with F″ = 0 one obtains A = B (up to a constant which one can set to unity via a redefinition of time). For arbitrary D the solution for B may not be possible in closed form, but for specific dimensions it can be found.

D = 3
In three-dimensional spacetime the solution for B, the curvature scalar, and the hybrid relation between f(r(R)) and R can be given explicitly, where C_1 and C_2 are integration constants. Note also that from the foregoing expressions we can identify the cosmological constant through C_2 = −Λ. Furthermore, setting α = 0, one recovers the previous example with C_1 rescaled, which suggests that it is related to the mass of the central object.
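As a cross-check of the F(r) = 1 case above, the expected metric function is the standard Schwarzschild-Tangherlini-(anti-)de Sitter form, quoted here from the general literature with our own normalization of the mass parameter:

$$B(r) = A(r) = 1 - \frac{\mu}{r^{D-3}} - \frac{2\Lambda\,r^{2}}{(D-1)(D-2)},$$

which for D = 4 reduces to $1 - 2M/r - \Lambda r^{2}/3$, consistent with the identification $C_2 = -\Lambda/3$ made in the D = 4 example below.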
D = 4
In four dimensions the solution is given explicitly, with f = −6C_2 − 36C_1α³ ln(1 + 1/(αr)) and R = −12C_2 − 72C_1α³ ln(1 + 1/(αr)). We see clearly the role of C_2 = −Λ/3, and we wish to proceed with C_1 ≠ 0. This is a black hole solution with a singularity at r = 0. If we set Λ = 0, then with α < 0 we may introduce a horizon for the solution, while for α > 0 the solution possesses a naked singularity at r = 0. To complete our investigation, let us examine the absence of ghosts and the thermodynamic stability of the explicit f(R) found in (31). Inspecting df/dR and d²f/dR², clearly both conditions cannot be satisfied simultaneously.

F(r) = αr^a
Our next example is a power-law form for the generating function, F(r) = αr^a, with constants α and a, which upon using (11) yields A(r). Substituting into (13) one finds B(r), in which C_1 and C_2 are the integration constants. Using A and B we also find f and R. We also note that although α and a are two arbitrary constants, a must satisfy a ≠ −D_2, 1 ± √D_1. It is remarkable to observe that in Sect. 3.3.2 of [57] the same ansatz for F has been considered, but the solutions to the field equations there (see Eqs. (46)-(49) of [57]) are not the same: what we found here is more general. As a matter of fact, our solutions (35)-(38) with C_2 = 0 reduce to their solutions. This shows that reducing the field equations to a master equation with a single generating function makes advances in finding exact solutions in f(R) gravity. In D = 3 dimensions the solution becomes rather specific, since the last term vanishes for all values of a. This is nothing but the solution found by Zhang, Liu and Li in [58], with their parameters identified accordingly. Note that the relation between f and R yields f ∼ R² for the specific choice a = 1/3. For D ≥ 4 one may set C_2 = 0, and therefore one obtains an analytic relation for f(R), f = ᾱR^{1−a/2}, in which ᾱ is a constant that can be set to unity (by a fine choice of α). In the case a = 0 the theory gives R gravity. The solution is a black hole with a singularity at r = 0. Let us also add that with C_1 = 0 and a = 1 the solution reduces to a global monopole-type solution with a deficit angle. Another interesting setting is a = −2 with C_2 = 0, by which, upon making a proper choice of α, one finds the solution explicitly. Clearly D = 10 and D = 4 are excluded. D = 4 is not allowed directly from (10), where H = 0 with F = α/r² demands AB = 0, which is not acceptable. For D = 10 a particular solution can be obtained, together with its Ricci scalar; the metric function A(r) follows accordingly from (11) and is given in (54). We comment that f = ᾱR^{1−a/2} does not satisfy df/dR > 0 and d²f/dR² > 0 simultaneously unless a is negative (note that ᾱ = 1 is needed to have Einstein gravity recovered). For instance, with ᾱ = 1 and a = −2, which implies f = R², both conditions are satisfied.

A new black hole solution in D = 3
In [58], where f(R) = R^{d+1} in three-dimensional spacetime with d = const. has been studied, the solution does not cover the case d = −1/2, which makes f(R) = √R. This can be seen from Eq. (12) of [58] and Eq. (45) (note that (3 − a)/(2(1 − a)) = 1 has no admissible solution there), so that paper does not cover the case f(R) = √R. However, in the following we wish to show that the solution for f(R) = √R can easily be obtained in three dimensions. To do so, let us consider F(r) = βr exp(αr) (58), with α and β two real constants.
This yields the solution explicitly. Note that C_1 and C_2 are two integration constants such that C_2 effectively plays the role of a cosmological constant. The solution is a black hole with a horizon located at r_h = −C_1, with the condition that −C_1C_2 > 0. Also, from R we see that the solution is singular if and only if C_1 = 0. In the sequel we are interested in C_2 = 0, which makes the solution rather simple but singular. Accordingly, the forms of f and R are given explicitly, by which, upon tuning the free parameter β through 4αβ²C_1 = 1, the form of f becomes proportional to √R. Here α and C_1 are positive constants, and the line element finally reads as given, in which r_0 = 1/(4αβ²) is also a positive constant.

A general class of solutions in 2 + 1 dimensions
In three-dimensional spacetime, in addition to what we have found so far, we wish to show that there exists an important class of solutions yet to be discovered. This specific class, however, is a characteristic feature of three dimensions only. To see this solution, let us set D = 3 in (13), where we have substituted B(r) by A(r) using (11), i.e., Eq. (68). Equation (67) possesses a trivial solution (69) for A(r), irrespective of the form of f(R), in which C_0 is an integration constant. In this situation the field equations are all satisfied provided B(r) and A(r) satisfy the condition (68), i.e., Eq. (70). One can easily see that with F = ξ = constant we obtain (71). The solution given by (69), (71), and η = const. constitutes a particular class. Note that since the choice of f(R) in (70) is arbitrary, this can be used to generate an infinite class of solutions. This will not be pursued any further here.

3 + 1-dimensional black hole solution in f(R) = R + 2α√R − 2Λ
Previously we found an exact solution for this model of gravity, with α ≠ 0. The solution to the field equations is given explicitly. The solution is a black hole for α < 0, and therefore one may write it in the form (76), in which the horizon is denoted r_+. A change of variables of the form t = √2 T, r = ρ/√2 reduces (76) to the Schwarzschild black hole with a deficit angle caused by a cosmic string.

Generalization to f(R) gravity coupled to matter sources
In this section we extend our vacuum analysis to the presence of matter coupled to gravity. The action therefore acquires a matter term, in which L_m is the matter Lagrangian density. The field equations follow, with T^ν_μ = diag(−ρ, p, q, q) the energy-momentum tensor of the matter source. The line element is again spherically symmetric, as in (1), and without going through the details of the field equations, we give the changes in the field equations (11) and (13). The field equation corresponding to Eq. (11) now reads (80), while the main Eq. (13), i.e. the master equation for B(r), takes the form (81), in which P, Q, and H are given in (14), (15), and (12), respectively. Note that, unlike the source-free case, here the master B equation is not a linear equation. It can also be observed that the particular choice ρ + p = 0 removes the non-linearity in the B equation (81). The further choice p = q leaves us with the same equation for B as in the sourceless case. Yet with q = 0, the source shows itself in the function f, as given in the sequel; the closed form of f follows. The foregoing expressions are not very illuminating unless we provide concrete examples. This is our aim in the following section.

Applications
One immediate application of Eq.
(81) is the extension of the f = R + 2α√R vacuum solution in 3 + 1 dimensions to gravity coupled to the global monopole [56], whose energy-momentum at very large distance is given explicitly (we assume A = B in the line element), in which η represents the global monopole charge. The solution for the metric follows. Again we must have α ≠ 0, and in the limit η = 0 one recovers the vacuum solution. The second application couples the Maxwell field to gravity in 2 + 1 dimensions with the same generating function, in which Q is the electric charge. The solution with small α, up to first order, reads simply, and clearly by setting α = 0 one recovers the charged BTZ black hole with cosmological constant Λ = C_2. We add that when the energy-momentum tensor of the matter source is of the form of a fluid, the solution should be considered an interior one. Therefore one has to make sure that the Israel junction conditions are satisfied at the interface between the exterior and interior solutions [59]. In the two examples we studied here, the energy-momentum tensors are long-range fields, which allows us to consider our solutions to be exterior.

Conclusion
The integrability of f(R) vacuum gravity with spherical/circular symmetry is reduced first to a master relation, Eq. (11), and a master equation for B(r), Eq. (13). Given any ansatz generating function F = df/dR in terms of the coordinate r, our method generates a solution pair A(r), B(r) and a solution for the function f(R). Most of the solutions encountered are of hybrid nature, that is, f(R) cannot be expressed explicitly in terms of R. The power-law form, for instance, of the form f(R) ∼ R^k with a rational number k, is obtained easily in our method. Some of the solutions presented as applications are already known; yet new and rare types of solutions can also easily be obtained. In the second part of the paper, we extend the integrability of the vacuum case to non-vacuum f(R) theories. For this case we also found a master relation, Eq. (80), and a master equation, Eq. (81). Considering f(R) to be the generating function of our formalism and T^μ_ν = diag[−ρ, p, q, q] to be our energy-momentum tensor, one finds a solution to the master equation (81). We presented two examples. In the first example we set F = 1 + αr with a global monopole coupled to gravity in 3 + 1 dimensions. The second example considers the Maxwell electric field coupled to gravity in 2 + 1 dimensions with the same generating function. In this case we found the solutions approximately, for small α. Let us also add that among the few examples we have studied we found some closed forms of f(R), such as f(R) = R + 2α√R − 2Λ and f(R) = ᾱR^{1−a/2}. We found that the first one cannot satisfy the conditions for the absence of ghosts and thermodynamic stability simultaneously, while the second one, for specific a, does satisfy both conditions. Finally, it is in order to state that our method of reduction for spherically symmetric f(R) gravity has a large scope as far as solutions are concerned. Physical implications of the solutions obtained, such as a dark matter connection, are not considered in the present study. An extension of our formalism to problems with different symmetries, such as stationary axial symmetry, requires a separate investigation.
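A short check of the two stability statements in the conclusion, using nothing beyond elementary derivatives (the closed forms themselves are reconstructed from the garbled source and are therefore an assumption): for $f(R) = R + 2\alpha\sqrt{R} - 2\Lambda$,

$$\frac{df}{dR} = 1 + \frac{\alpha}{\sqrt{R}}, \qquad \frac{d^{2}f}{dR^{2}} = -\frac{\alpha}{2R^{3/2}},$$

so $d^{2}f/dR^{2} > 0$ forces $\alpha < 0$, which violates $df/dR > 0$ for sufficiently small $R$; the two conditions cannot hold over the full range simultaneously. For $f(R) = \bar{\alpha}R^{1-a/2}$ with $\bar{\alpha} = 1$ and $a = -2$, i.e. $f = R^{2}$,

$$\frac{df}{dR} = 2R > 0 \ (R>0), \qquad \frac{d^{2}f}{dR^{2}} = 2 > 0,$$

so both conditions are met, in agreement with the text.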
5,454.4
2016-06-01T00:00:00.000
[ "Mathematics" ]
QCD factorization for chiral-odd parton quasi- and pseudo-distributions
We study chiral-odd quark-antiquark correlation functions suitable for lattice calculations of twist-three nucleon parton distribution functions $h_L(x)$ and $e(x)$, and also the twist-two transversity distribution $\delta q(x)$. The corresponding factorized expressions are derived in terms of the twist-two and twist-three collinear distributions to one-loop accuracy. The results are presented both in position space, as the factorization theorem for Ioffe-time distributions, and in momentum space, for quasi- and pseudo-distributions. We demonstrate that the twist-two part of the $h_L$ quasi(pseudo)-distribution can be separated from the twist-three part by virtue of an exact Jaffe-Ji-like relation.

Introduction
Parton distribution functions (PDFs) are the quintessential component of the modern picture of hadron structure. The twist-two PDFs have a probabilistic interpretation as parton density distributions within hadrons and give rise to the leading contribution to the majority of observables. In contrast, the twist-three distributions are related to the quantum-mechanical interference between a single parton and a gluon-parton pair. Their contributions are usually suppressed by the hard scale, although they can be enhanced in certain kinematic regions. Nevertheless, twist-three effects are vital for the understanding of hadron structure at a quantum level. Quantifying twist-three effects is a challenging task. Lattice QCD has the potential to explore the twist-three distributions through the calculation of specially designed Euclidean observables. In particular, the nucleon matrix element of the local twist-three chiral-even operator of the lowest dimension was first calculated on the lattice in Ref. [1], and such studies will undoubtedly continue. The major difficulty of this approach is that, due to the loss of continuous space-time symmetry, the twist-three operators on the lattice mix with leading-twist operators of lower dimension, so that the renormalization procedure becomes highly nontrivial. In recent years there has been increasing interest in the possibility of determining PDFs from custom-made Euclidean correlation functions, bypassing Wilson's operator product expansion. The general scheme of such calculations is to consider a product of suitable local currents at a space-like separation and match the lattice calculation of this quantity to the perturbative expansion in terms of collinear distributions. This can be done both in position and in momentum space. Several proposals to implement this idea exist [2-7]. The particular choice of the correlation function of the quark and antiquark fields connected by a Wilson line [5] has received the most attention, see, e.g., [8,9] for a review. In order to distinguish these objects from their light-cone analogs, one usually refers to such space-like correlation functions as pseudo-PDFs (e.g. [10]) or quasi-PDFs (e.g. [11]), depending on the actual implementation. The first lattice simulations of twist-three PDFs within the qPDF approach appeared recently [12,13], demonstrating the feasibility of such studies. The discussion of the collinear limit of the twist-three qPDFs and pPDFs presented in [14,15] is, however, not complete. In Ref.
[16] we have formulated the factorization theorem for the axial-vector qPDF (and pPDF) to twist-three accuracy, which is more involved than the twist-two case [17], and calculated the corresponding coefficient functions to one-loop accuracy. In this paper, we repeat the same analysis for the chiral-odd case and formulate the factorization theorem for the quasi(pseudo) distributions related to the twist-three PDFs h_L(x) and e(x). As a byproduct of this calculation, we also derive the one-loop coefficient function for the twist-two transversity PDF δq(x), for which, it seems, there is no agreement in the literature (cf. [18,19]). This direction of study is interesting as the chiral-odd twist-three PDFs e(x) and h_L(x) are poorly known. They show up in observables that involve other chiral-odd functions and are thus challenging to extract from experimental data. As a matter of principle, these functions can be directly extracted from the single- and double-spin asymmetries in semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process [20-23]. This is, however, problematic due to a lack of very precise data. Alternatively, twist-three distributions can be accessed in the small-b limit of transverse momentum distributions (TMDs). This approach looks more promising as the relevant observables are easier to measure, and hence the data is of better quality. For example, the Qiu-Sterman function T has been extracted in Refs. [24,25] from the analysis of the small-q_T part of the transverse spin asymmetry data. Chiral-odd distributions can therefore be addressed similarly by studying the chiral-odd TMDs, see [26-28] for the relevant relations. In this way, the Qiu-Sterman projection of the underlying quark-gluon correlation function that is related to e(x) gives rise to the Boer-Mulders function and could thus be observed in the cos(2φ) modulation in unpolarized Drell-Yan processes [29]. In a different context, the role of the twist-three PDF e(x) in the proton mass decomposition has also been discussed in [30,31]. In Ref. [16] we pointed out that the extraction of the chiral-even twist-three PDF g_T(x) from lattice calculations is complicated by the breakdown of the Wandzura-Wilczek relation at the qPDF (pPDF) level. Thus, the separation of the twist-three contributions of interest from the dominant twist-two terms at the level of lattice data appears to be highly nontrivial. For chiral-odd distributions the situation is better, as the qPDF (pPDF) related to e(x) does not contain any twist-two admixture, and for h_L(x) we are able to derive an exact Jaffe-Ji-like relation for the twist-two part. This work follows Ref. [16] both conceptually and methodologically, so we present only the necessary definitions and the final expressions for the factorization theorems and NLO coefficient functions, and omit technical details. Sec. 2 is introductory and contains general definitions and notation. The tree-level factorization expressions in terms of quark-gluon correlation functions, QCD equation of motion (EOM) relations, and PDF definitions are given in Sec. 3. Sec. 4 is devoted to the calculation of the NLO corrections and contains our main results. The position-space expressions are given in Sec. 4.1, and the momentum-space ones are collected in Secs. 4.2 and 4.3 for the pPDFs and qPDFs, respectively. The concluding Sec. 5 contains the summary and a short discussion. Two appendices supplement the main text: App.
A contains the evolution kernel for the twist-three chiral-odd distributions, and in App. B the operator-level results are presented for the first-order expansion in α_s of the chiral-odd quark-antiquark correlation functions in position space.

Preliminaries
We consider the nucleon matrix elements of a product of quark and antiquark fields for chiral-odd Dirac structures in the forward limit, with Γ = 1l or Γ = iσ^{μν}γ_5 = −(1/2)ε^{μναβ}σ_{αβ} (in four dimensions). Here z^μ is a four-vector, |p, s⟩ stands for the nucleon state with momentum p and spin s, q and q̄ denote the quark and antiquark fields, respectively, and [z, 0] is the straight Wilson line in the fundamental representation of the gauge group, rendering the operator gauge invariant. We follow the conventions γ_5 = iγ^0γ^1γ^2γ^3 and σ^{μν} = (i/2)(γ^μγ^ν − γ^νγ^μ). The flavor indices of the quark fields are omitted for brevity. For space-like separations z² < 0, which are of interest in the context of lattice calculations in Euclidean space, the time-ordering in Eq. (2.2) is in fact redundant. In this work, we calculate the matrix elements (2.1) at next-to-leading order (NLO) and to twist-three accuracy in QCD, taking the collinear parton distributions as the nonperturbative input. Note that in the forward kinematics considered here, the matrix elements depend only on the distance z between the fields. Thus, without loss of generality, we keep the quark field q at the origin. We tacitly assume that the nonlocal operator (2.2) is renormalized. The renormalization constant Z_O in the MS scheme is known to three-loop accuracy [32,33]; it is the same for all Dirac structures. Both sides of Eq. (2.4) depend on the renormalization scale μ_R. Here and in the following, we do not indicate the dependence on μ_R unless it is important for understanding. In renormalization schemes with an explicit regularization scale, the Wilson line in Eq. (2.2) suffers from an additional linear ultraviolet divergence [34] which has to be removed. This can be done by introducing a residual mass term that absorbs such divergences, as in the case of heavy quark effective theory, or, alternatively, by forming a suitable ratio of matrix elements involving the same operator [35,36]. This issue has been discussed extensively in the literature, so in our opinion no further explanation is needed. The nucleon matrix element (2.1) can be parametrized in terms of several invariant functions, which we define in (2.5). Here σ^{μz} = σ^{μν}z_ν, M² = p² is the nucleon mass squared, s^μ is the spin vector normalized as s² = −1 with (s·p) = 0 for a transversely polarized nucleon state, and a dimensionless parameter is defined from these variables. The subscript T indicates the projection of the nucleon spin vector onto the transverse plane, which is orthogonal to both z^μ and p^μ, cf. (2.8). We refer to the variable ζ = (p·z) as the Ioffe time [37]. The similarly defined (regularized) invariant functions for lightlike separations z² = 0 give rise to the chiral-odd parton distributions in position space (see below) and are called Ioffe-time distributions (ITDs) [35,37,38]. The terminology for the corresponding off-light-cone position-space matrix elements is not well established: they are referred to as quasi-ITDs in [16,36] and pseudo-ITDs in [39-41]. In this work, we use the abbreviation qITD for the functions E(ζ, z²), H_1(ζ, z²) and H_L(ζ, z²). The three qITDs E, H_1 and H_L at small z² match collinear parton distributions (PDFs).
QCD factorization for E and H_L is more complicated than for H_1, as both of them receive contributions from twist-three quark-antiquark-gluon collinear distributions. In this paper we formulate the factorization theorem for E and H_L and calculate the necessary coefficient functions to one-loop accuracy. Our method of calculation is based on the light-ray operator product expansion (OPE) [42,43] in combination with the background field technique [44,45]. We used the same approach for chiral-even distributions in our previous publication [16], where it is described in detail. The present case differs from Ref. [16] only in the Dirac structure Γ, so the calculation follows the same route. The actual computation is done in position space. The coefficient functions for qPDFs and pPDFs in momentum-fraction space are then obtained by the appropriate Fourier transform, as defined in Secs. 4.2 and 4.3.

Chiral-odd parton distributions and quark-antiquark-gluon correlation functions
At tree level, the operators (2.2) in the z² → 0 limit can be identified with the corresponding light-ray operators, cf. [16], whose nucleon matrix elements define parton distribution functions. In this work we need the light-cone matrix elements given in (3.1), (3.2). Here and in what follows, we use the "hat" notation for the PDFs in position space (Ioffe-time distributions [37,38]). Thus, at tree level (and neglecting terms O(z²)) we obtain the relations (3.4). The transversity PDF δq(x) is purely twist-two; for x > 0 and x < 0 it describes the distribution of transversely polarized quarks and antiquarks (with a minus sign) in the nucleon, respectively. In contrast, the function h_L(x) can be decomposed into twist-two and twist-three parts, according to the (geometric) twist of the contributing operators. One obtains for the twist-two part the expression in (3.6) [46]. A simple method to derive these relations is the following: the leading-twist contribution of the light-ray operator must satisfy the operator equation of [47]. Taking the nucleon matrix element of this operator identity and its parametrization in (2.5), (3.1), (3.2), one obtains a first-order differential equation (see e.g. [27]), and it is easy to check that the expression in (3.6) satisfies this equation. Beyond tree level, the PDFs become scale-dependent and the tree-level relations in (3.4) become decorated by perturbatively calculable coefficient functions. In addition, the twist-three quark-antiquark distributions e and h_L^{tw3} mix with the three-particle quark-antiquark-gluon correlation functions, giving rise to additional contributions that do not have a probabilistic interpretation. It is important to realize that quark-antiquark and quark-antiquark-gluon contributions cannot be taken as independent degrees of freedom, as they are related by the QCD equations of motion (EOM). Neglecting operators with total derivatives, which do not contribute to forward matrix elements, one obtains the relations (3.9) [48]. These operator relations are exact in QCD with massless quarks, so they must be satisfied by all matrix elements. In the first line, the first term on the r.h.s. gives rise to the twist-two distribution h_L^{tw2}, and the second term provides a representation for h_L^{tw3} in terms of a certain integral of the quark-antiquark-gluon correlation function defined below. The second relation gives rise to a similar representation for e(x) in terms of a (different) quark-antiquark-gluon correlation function, up to an additional local term.
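For concreteness, the twist-two part referred to in (3.6) has, in the standard Jaffe-Ji form (quoted here from the general literature rather than reconstructed from this extraction),

$$h_L^{\mathrm{tw2}}(x) = 2x\int_x^1 \frac{dy}{y^2}\,\delta q(y),$$

the chiral-odd analogue of the Wandzura-Wilczek term for $g_T$.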
It is convenient to consider the full set of quark-antiquark-gluon operators as the operator basis at the intermediate step. Using (3.9), a part of the resulting expressions can be brought back to the two-particle form, but this reduction does not hold for the full result. Nucleon matrix elements of the three-particle twist-three operators define position-space quark-antiquark-gluon correlation functions: ⟨p, s| q̄(az) σ^{μz} γ_5 gF^{μz}(bz) q(cz) |p, s⟩ = 4λ_z ζ² M H(aζ, bζ, cζ) (3.10). They are related to the correlation functions in momentum-fraction space by the Fourier transformation (3.14). The functions H and E obey discrete symmetry relations [27]; as a consequence, for their position-space versions E and H the relations (3.15), (3.16) hold for any non-singular function w(α). These relations are essential for simplifying the weight factors in the integrals, see Sec. 4. The relation x_1 + x_2 + x_3 = 0 in (3.14), imposed by translational invariance in the forward kinematics, implies that the twist-three PDFs are functions of two variables. However, the three-variable notation used here is more convenient for several reasons. First, it simplifies the symmetry relations (3.15), (3.16). Second, twist-three correlation functions have different parton interpretations in each kinematic domain x_i ≶ 0 [50] and can most naturally be presented using three-component barycentric coordinates, see [51,52]. With these definitions, we obtain integral representations for the twist-three PDFs, where Σ_q is the local contribution related to the nucleon σ-term, Eq. (3.21). It is often convenient to separate the local contribution (3.21) from the rest; in what follows we use the subscript "nl" (nonlocal) for quantities in which the sigma-term contribution is subtracted, e.g., e_nl. Going over to momentum-fraction space, one can in principle eliminate some integrations using the delta functions, but the resulting expressions contain a multitude of integration regions, so we prefer to leave them in this form. We stress that these relations are exact in QCD. The scale dependence of the correlation functions E and H is autonomous, in the sense that it does not involve other distributions. The evolution equations involve kernels K_{E/H} which are functions of six variables, x = {x_1, x_2, x_3} and y = {y_1, y_2, y_3}. The leading-order (LO) expressions for the evolution kernels can be found in [51,53]. They are relatively compact in position space but rather lengthy in terms of the momentum fractions. For completeness, we present the position-space kernels [51] in App. A. As a final remark, in Refs. [27,28] a different notation for the chiral-odd twist-three correlation functions is used, with ε_T^{μν} = p_α z_β ε^{αβμν}/ζ (and z² = 0); the relation to our notation is δT_g = H and δT = E.

QCD factorization and one-loop results
Beyond tree level, the expressions in (3.4) can be generalized to factorization theorems of a schematic convolution form, where the ellipses stand for terms O(z²), and the coefficient functions C_k are given by a series expansion in the QCD coupling, with tree-level expressions as the leading terms. The validity of QCD factorization in this form is a direct consequence of the existence of the operator product expansion (OPE). The factorization theorems for pseudo- and quasi-distributions in momentum space are obtained by Fourier transforming the above expressions and do not require any additional justification. Explicit expressions are given below.
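Schematically, the position-space factorization theorems discussed here have the generic ITD structure (our own shorthand; the paper's explicit equations are not reproduced in this extraction):

$$\mathrm{H}_1(\zeta, z^2; \mu) = \int_0^1 d\alpha\; C_1(\alpha, z^2\mu^2)\,\widehat{\delta q}(\alpha\zeta; \mu) + O(z^2),$$

with analogous convolutions, plus three-particle terms of the type $C_{3pt}\otimes \mathrm{H}$ and $C_{3pt}\otimes \mathrm{E}$, for $\mathrm{H}_L$ and $\mathrm{E}$.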
The main purpose of this work is to compute the coefficient functions C_k. The calculation is done using the light-ray OPE and the background field technique, following the analogous calculation for the chiral-even case in Ref. [16]. Thus we present here only the final expressions for the various distributions and omit technical details. The operator-level expressions are collected in App. B. As already discussed in Sec. 3, the twist-three PDFs h_L^{tw3} and e are related to the more general three-particle distributions by Eqs. (3.19), (3.20). Conversely, these identities allow one to rewrite a part of the three-particle contributions back in two-particle form. This can be done as follows: integrating Eqs. (3.19), (3.20) with an arbitrary weight function f(τ), one finds after simple algebra relations that can be inverted. Thus the reduction to a two-particle form is possible for arbitrary weight functions g(α), but requires a very specific integration weight over β, i.e. over the position of the gluon field. Note that this weight factor can sometimes be simplified by virtue of the symmetry relations in Eqs. (3.15), (3.16). We use these relations to rewrite the quark-antiquark-gluon contributions in terms of h_L^{tw3} and e whenever possible, as this reduces the nonperturbative input.

Position space: Ioffe-time distributions
The position-space results are straightforward to derive from the light-ray OPE expressions in Appendix B, subtracting the singular 1/ε terms and taking the nucleon matrix elements. In the following expressions L_z denotes the logarithm defined in (4.11), and the plus-distribution is defined as usual, (4.12). The qITD H_1 at NLO follows, with C_F = (N_c² − 1)/(2N_c) the quadratic Casimir of a general SU(N_c) gauge theory. The qITD H_L splits into twist-two and twist-three terms, with corresponding expressions in which C_1 is given in Eq. (4.14); note that the coefficient function of the two-particle twist-three contributions is not equal to the twist-two coefficient function. The term ∼ C^{(1)}_{L,3pt} ⊗ H represents a "genuine" three-particle contribution, in the sense that it cannot be rewritten in terms of h_L^{tw3} using (4.8). It contains a logarithmic contribution ∼ L_z suppressed by a color factor 1/N_c, in agreement with [54], but also a color-enhanced term ∝ N_c independent of L_z. For the scalar case we obtain the analogous result. We have checked that the resulting scale dependence of the coefficient functions is in agreement with the twist-three evolution equations for (chiral-odd) three-particle distributions, see App. A. Note that the evolution kernels P^{tw3} (4.20) for E and H coincide, as these distributions satisfy the same evolution equation (A.2). The expressions for the non-logarithmic parts, C_{L,3pt} (4.19) and C^{(1)}_{S,3pt} (4.23), are also the same up to the last term in the square brackets. The Σ-term in Eq. (4.21) does not receive a logarithmic contribution ∼ L_z, as the one-loop scale dependence of the quark condensate cancels against the scale dependence of the renormalized nonlocal operator (2.2). To one-loop accuracy [55] the renormalization constant is given explicitly.

Momentum space: quasidistributions
Following Refs. [5,17], we define qPDFs as Fourier transforms of the qITDs with respect to the distance z ≡ |z|. The orientation of the vector z^μ is fixed, where v^μ = z^μ/|z| is the unit vector along z^μ and p_v = (p·v).
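The Fourier transform defining these qPDFs has the generic form used in the quasi-distribution literature (written in our own notation and normalization, which may differ from the paper's elided definition):

$$q(x, p_v) = \frac{p_v}{2\pi}\int_{-\infty}^{\infty} dz\; e^{\,i x p_v z}\, \mathrm{H}(\zeta = p_v z,\; z^2),$$

so that the support in $x$ extends beyond $|x|\le 1$, as noted below.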
The derivation of the one-loop coefficient functions for the qPDFs is more involved due to the terms containing L_z (4.11), which are also the ones responsible for extending the support of qPDFs beyond the partonic region |x| < 1 to |x| < ∞. To avoid the complications of a direct Fourier transformation, we use an identity for δq, and similar ones for h_L and e. Starting from this identity, one can obtain a relatively simple relation between the coefficient functions for the qPDFs and pPDFs. Details of the derivation and the explicit form of this relation can be found in App. B of Ref. [16]. In the expressions given below we use the notation defined there, and we obtain the result (4.43) involving e_nl(y). The two-particle coefficient functions follow, where it is tacitly assumed that all contributions ∼ δ(x) (e.g., in (4.14)) are included. The three-particle coefficient functions follow as well; in these expressions the notation of (4.50) is used. The integrals over the delta functions can be evaluated, revealing 30 distinct integration domains for the "genuine" three-point contribution, each of which has a distinct coefficient function. The qPDF h_1(x, p_v) has previously been considered in Refs. [18,19]. Our expression for the one-loop coefficient function agrees with the result of [18], but does not agree with the expression derived in [19]. Our result in (4.44) reads in explicit form as (4.51); here the first term ∼ δ(x) depends on the normalization scheme, see e.g. [17]. In particular, it drops out if one uses the short-distance normalization scheme.

Discussion
We have formulated the factorization theorem for the chiral-odd space-like correlation functions (2.5) in terms of parton distributions to twist-three accuracy and calculated the corresponding coefficient functions at NLO. These objects can be computed by lattice QCD methods, see e.g. Refs. [13,18], granting access to the chiral-odd twist-three functions. The utility of these studies depends, however, on the possibility of subtracting the twist-two contribution to reveal the desired twist-three part. A paradigm case is provided by the DIS structure function g_2(x, Q²), for which the twist-two contribution can be calculated in terms of g_1(x, Q²) and subtracted from the experimental data, at least in principle. The situation with qPDFs and pPDFs is more complicated. As we demonstrated in Ref. [16], the Wandzura-Wilczek relation does not hold for the chiral-even axial-vector qITDs and, hence, neither for the pPDFs and qPDFs. Thus the twist-two contributions cannot be subtracted exactly, but only up to a certain order in perturbation theory. The situation for the chiral-odd distributions turns out to be more encouraging. The distribution H_L includes a twist-two contribution (4.16); however, at one loop it involves the same coefficient function C^{(1)}_1(α, L_z) (4.14) as H_1. This equality is not accidental: it is a consequence of the fact that in the chiral-odd case no additional tensor structure can arise from loop integrals, as H_L^{tw2} and H_1 are just different Lorentz projections of the same twist-two operator with an open vector index. This is in contrast to the chiral-even case, where terms ∼ z_μz_ν/z² (in addition to the g_{μν} tensor present already at tree level) emerge via loop integrals, giving rise to additional contributions [16]. This argument applies to all orders in perturbation theory, so the equality of the coefficient functions for H_L^{tw2} and H_1 is exact.
As a consequence, the twist-two contribution to H_L can be eliminated in position space by the analogue of the Jaffe-Ji relation [46], yielding H_L^{tw3}, cf. (3.6). The same subtraction can be implemented for pPDFs (here x > 0), but cannot be applied directly to qPDFs. The reason is that in this case the Fourier transformation is applied to the complete z-dependence, without distinguishing whether it originates from ζ = (p·z) or z². As a result, the relevant analogue of Eq. (5.1) involves z-dependence in the logarithms L_z even after the Fourier transformation. In principle, the twist-two remainder on the r.h.s. of this relation can be subtracted up to O(a_s²) by promoting δq(x) → h_1(x). As is well known, the extraction of the momentum-fraction dependence of PDFs from lattice calculations requires hadron sources with large momentum, which is a notorious problem. Large momenta are not necessary, however, to access the overall normalization of the twist-three contributions, encoded in the matrix elements of the local twist-three operators of lowest dimension. To this end the ζ → 0 expansion of the qITDs is sufficient. One obtains expansions whose coefficients involve the local matrix element ⟨p, s| q̄ σ^{μn} γ_5 [iD^n, gF^{μn}] q |p, s⟩, with all fields assumed to be at the origin and n² = 0 (an equivalent representation also holds). To conclude, in this work we have presented, for the first time, the complete NLO analysis of the chiral-odd quasi- and pseudo-distributions of the nucleon to twist-three accuracy. The results are encouraging, and we expect that the chiral-odd twist-three PDFs can be constrained from lattice calculations in the near future.

Acknowledgments
This study was supported by Deutsche Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926, "Next Generation pQCD for Hadron Structure: Preparing for the EIC", project number 40824754. Y.J. also acknowledges the support of DFG grant SFB TRR 257.

A Evolution kernel for twist-3 distributions
The evolution equations for twist-three quark-antiquark-gluon distributions can be found in Refs. [51,53]. For the readers' convenience, we collect the relevant expressions in this appendix. The evolution equation for the quark-antiquark-gluon correlation functions in position space involves an integral operator (evolution kernel) H, which is the same for H and E, Eq. (A.2). The last contribution, proportional to 3C_F, is universal for all distributions and cancels against the renormalization-scale dependence of the qPDF operator. The corresponding expressions for the distributions in momentum space are lengthy, as the evolution kernels in the different sectors x_i ≶ 0 are not the same; explicit expressions can be found in [53]. (Figure 1: contributions of different topology to the light-ray OPE of the quark-antiquark operator.) Note that the evolution kernel (A.2) becomes distinct for H and E once the symmetry relations (3.15), (3.16) are applied. The twist-three quark-antiquark distributions h_L^{tw3} and e_nl are given by the integrals (3.19), (3.20). Application of the operator H to these expressions yields the results below; here we have used the relations (3.15), (3.16) to simplify them. In both expressions, in the contributions in the second line the gluon field is positioned between the quark and the antiquark. These terms can be rewritten as a convolution of two-particle kernels with h_L and e_nl. The remaining terms ∼ 1/N_c contain the gluon field outside of the quark-antiquark pair; they cannot be simplified in this way and contribute to the "genuine" three-particle part.
B Light-ray OPE at twist three
One-loop contributions to the light-cone expansion of the nonlocal operators in Eq. (2.2) can be divided into several classes: vertex corrections V and V̄, "exchange" diagrams E with a hard gluon connecting the quark and the antiquark, and the self-energy correction W to the Wilson line, as shown schematically in Fig. 1. Each contribution is separately gauge invariant. In this appendix we collect the corresponding expressions for the twist-three contributions in d = 4 − 2ε dimensions, i.e. before the subtraction of collinear singularities. We write the result in a form where the tree-level expressions are given in Eq. (3.9). We obtain the explicit expressions for each class; the two-particle contributions in these expressions can be rewritten in terms of the three-particle operators using the equations of motion (3.9). Further, Γ = {1l, iσ^{pz}γ_5} for W_e and W_h, respectively.
6,466.6
2021-08-06T00:00:00.000
[ "Physics" ]
Verbal Projection of Comparable Translations of a Detective Story
Based on the systemic functional framework, this paper attempts to compare verbal projection in two comparable translated texts of a detective story entitled A Scandal in Bohemia, one from the early 20th century (henceforth TT1) and the other from the early 21st century (henceforth TT2). Approximately one hundred years apart, these two translations are strikingly different in their language use, with classical Chinese being used in TT1 and plain (colloquial) Chinese being used in TT2. By analysing and comparing the lexicogrammatical features of the verbal clauses in the two translated texts, this paper summarises the choices made by the translators at these two different historical moments: when translating the source text, TT1 translators show more flexibility by incorporating more addition and omission into their translation than TT2 translators. This study proceeds by investigating verbal clauses. Halliday and Matthiessen (2014: 302) mention that "clauses of saying are an important resource in various kinds of discourse, and they contribute to the creation of narrative by making it possible to set up dialogic passages" and that "when narrative passages are constructed in conversation, verbal clauses are often used to develop accounts of dialogue on the model of 'x said, then y said' together with quotes of what was said". This pattern is frequently identified in detective stories, and certainly serves as an important resource for exploring verbal clauses and comparing Chinese translations. This also explains why this paper focuses on verbal projection instead of mental projection, although the latter is also a resource of projection. Therefore, of the various aspects encompassed in detective stories, this paper focuses on one aspect in particular, namely verbal clauses. Projection typically involves "a projecting clause and a projected clause, or a combination of projected clauses" (Matthiessen and Teruya, 2013: 51). In the case of quoted speech, "the projecting clause includes a verb of 'saying', the most common in English being say, and the projected passage is fairly unrestricted in terms of speech function" (Matthiessen and Teruya, 2013: 51). There are in fact three systems involved in the differentiation of different kinds of projection: (i) the level of projection (idea vs. locution), (ii) the mode of projection (hypotactic reporting vs. paratactic quoting), and (iii) the speech function (projected proposition vs. projected proposal) (Halliday and Matthiessen, 2014: 509). Level of projection and mode of projection intersect to define four types of projection nexus: quoting speech, reporting speech, reporting thought and quoting thought. As reporting thought and quoting thought are projected by mental processes, quoting speech and reporting speech, which are projected by verbal processes, will be the main categories studied in this paper. Two examples are chosen from the text of A Scandal in Bohemia to illustrate these two categories: (1) when verbal clauses project indirect speech (hypotactic): I had been told that it would certainly be you; (2) when verbal clauses project direct quotation (paratactic): "It is quite a pretty little problem," said he. As stated in Halliday and McDonald (2004), in Chinese the prototypical verb in verbal clauses is 'say', which is used in general contexts, e.g., Ni shuo shenme? ('You said what?')
Shuo projects quoted speech in all speech functions, and can be added to verbs in other process types to enable them to project: Ta xiaozhe shuo, 'Ni bie lai shuo zhetao'. ('He said, laughing: "Don't say that to me."') There are different verbs of saying, some with additional circumstantial features, which may influence translations. The meanings of some typical verbs of saying are generalised below (adapted from Halliday and Matthiessen, 2014: 514). For instance, when we know that reply means 'say in response', we are in a better position to explain why the Chinese translation can be '回答' (hui da; reply) or '回答说; 回答道' (hui da shuo; hui da dao), with the latter implying the meaning of 'response'. Wierzbicka (1987) provides a comprehensive and highly detailed analysis of the verbs of saying, which are categorised into 37 groups. For instance, the verb 'ask' is one of the most common verbs in English, but we need to differentiate between 'ask' in the sense of 'asking a question' and 'ask' in the sense of 'asking someone to do something' (Wierzbicka, 1987: 66). In terms of projection, the first meaning is often adopted to ask questions when projecting direct speech. Matthiessen and Teruya (2013) undertake an important investigation of projection in terms of the quoting strategies used in English, but there are few other works on projection, particularly within the Chinese context. There are even fewer studies on verbal projection, but Zeng (2006), Liang and Zeng (2016), and Zeng and Liang (2019) contribute to the study of projection in both English and Chinese. Despite an abundance of translations of various works since the Late Qing period, studies on these translations are comparatively limited, with only a few sporadic papers and MA theses considering early translation in the Late Qing period. For instance, Zhang (2010) explores the translation of detective stories in the late Qing period from the perspective of polysystem theory; Yu (2004) discusses the significance of China's modern translation of detective stories; Zhang and Lin (2006) provide an assessment of the partial translations of The Complete Sherlock Holmes; Zhang (2002) focuses on the two translation upsurges of detective stories in China. However, few studies have been undertaken on either later translations or the comparative analysis of different translations. Therefore, this study attempts to fill this gap by comparing two translations of a detective story from different time periods.

Research Methodology and Two Comparable Translations
This section presents the methods adopted in the current research. This study falls within the broad scope of descriptive translation studies (DTS), a methodology developed by Toury (1995). DTS dates from the early 1970s, when Holmes (1972/1988) claimed translation studies to be a scientific and independent discipline. The term descriptive is the opposite of prescriptive, signalling "the rejection of the idea that the study of translation should be geared primarily to formulating rules, norms or guidelines for the practice or evaluation of translation or to developing didactic instruments for translator training" (Hermans, 1999: 7). Moreover, DTS moves translation studies from prescribing 'good' or 'correct' translation to describing and explaining actual translation behaviour, and it is only "through studies into actual behavior that hypotheses can be put to a real test" (Toury, 1995: 16-17).
In other words, a descriptive study does not involve value judgment, and the current study does not intend to indicate whether one translation is better or worse than the other; its aim is only to describe the linguistic features in both translations, without subjective preference. Descriptive research is thus designed to obtain information concerning the current status of a phenomenon, and its aim is to describe what exists at the time of the research (Ary, 1979: 295), which is more scientific and objective. As indicated by Toury, DTS "aspires to offer a framework for individual studies" (Toury, 1995: 11), and is therefore understood to be a model that sets guidelines for research on actual translation problems. Because DTS focuses on "observable aspects of translation, it has also been called empirical" (Hermans, 1999: 7). Located within the framework of DTS, this is an empirical study, which conducts a descriptive investigation into the Chinese translations of a detective story. Since this study involves two translations from different time periods, a comparative study will be conducted. According to Toury, there are three types of comparison in translation studies: first, comparing parallel translations into one language which came into being during different periods of time; second, comparing different phases of the emergence of a single translation; third, comparing translations into different languages (Toury, 1995: 73-74). The current study focuses on the first type of comparison. With a view to investigating the features of verbal clauses in a detective story and comparing the differences between two Chinese translations - the 1917 translation, i.e. the early 20th century (literary language or classical Chinese) (henceforth TT1), and the 2011 translation, i.e. the early 21st century (plain language or colloquial Chinese) (henceforth TT2) - A Scandal in Bohemia written by Conan Doyle (henceforth ST) and its two Chinese translations are selected for analysis and comparison. This short story, A Scandal in Bohemia, depicts Sherlock Holmes' solution to an assignment posed by the King of Bohemia: to recover a photograph of the King with his lover, Irene Adler, who is threatening to disclose her relationship with the King by showing this photograph to the King's bride at their wedding. TT1, translated by Changjue and Xiaodie, is entitled Pretty Figure (qiàn yǐng), which is markedly different from the original title; the translation Bohemian Scandal (bō xī mǐ yà chǒu wén) by Chen Yulun (TT2) is more faithful to the original title. In order to explore the differences and similarities between the two Chinese translations, this paper focuses on their lexicogrammatical features. To analyse and compare the two translations, TT1 and TT2, of A Scandal in Bohemia, this paper adopts systemic functional theories. Informed by systemic functional linguistics (SFL), a language is a complex semiotic system with various levels or strata (Halliday and Matthiessen, 2014: 24). SFL distinguishes five strata - phonetics, phonology, lexicogrammar, semantics and context. Comparatively speaking, lexicogrammar serves as the basis for textual analysis, reflecting various lexicogrammatical features; this paper therefore focuses on the lexicogrammatical stratum for analysis and comparison.
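To illustrate the kind of corpus tallying this analysis relies on, a minimal sketch that counts verbs of saying in an English ST and a Chinese TT follows; the sample texts, the verb lists, and the literal matching are illustrative stand-ins, not the study's actual procedure, which would require proper identification of projecting and projected clauses:

```python
# Sketch: tally verbs of saying that project direct speech.
# Verb lists are illustrative; real clause extraction would need
# parsing rather than literal string matching.
import re
from collections import Counter

EN_SAYING = ["said", "asked", "cried", "remarked", "answered"]
ZH_SAYING = ["曰", "说", "道", "问"]

def tally(text: str, verbs: list[str]) -> Counter:
    counts = Counter()
    for v in verbs:
        counts[v] = len(re.findall(re.escape(v), text))
    return counts

st = '"It is quite a pretty little problem," said he.'
tt = "他说：“这是一个相当有趣的小问题。”"

print(tally(st, EN_SAYING))  # e.g. Counter({'said': 1, ...})
print(tally(tt, ZH_SAYING))  # e.g. Counter({'说': 1, ...})
```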
Another important concept of SFL is metafunction, of which there are four types, i.e., the textual, interpersonal, experiential and logical metafunctions, which are intrinsic to language. In order to compare the similarities and differences between the two translations, this paper conducts an analysis of all four metafunctions. More specifically, it begins with a thematic analysis (textual metafunction), followed by a mood and modality analysis (interpersonal metafunction), then a transitivity analysis (experiential metafunction) and, finally, a logico-semantic analysis (logical metafunction).

3 Case Study of A Scandal in Bohemia

All verbal clauses projecting direct or indirect speech were extracted from the text, which created a corpus of 70 verbal clauses projecting direct speech and 12 projecting indirect speech. In this paper, only the 70 verbal clauses projecting direct speech will be analysed. In this section, analyses are presented to compare the differences between TT1 and TT2.

3.1 Thematic Analysis

On the basis of the Thematic analysis of the ST, TT1 and TT2 and the comparisons between them, the following observations can be made:
(1) In terms of textual Theme, there is only one textual Theme in the ST and TT2, but 7 textual Themes in TT1 (see Table 2).
(2) In terms of interpersonal Theme, there are no interpersonal Themes in the parallel texts when the verbal clauses project direct speech.
(3) In terms of topical Theme, it can be seen from Table 3 that:
- Some elliptical topical Themes are identified in the three texts. In the Chinese translations the topical Theme is typically the Sayer, such as 'Holmes' or 'the King', and this differs from the ST. We find that the Chinese translations tend to give the reader more explicit information about who has said or asked something; another reason for this is that the verbs of saying cannot be thematised in the same way as in the English text.
- All the topical Themes in TT1 and TT2 are unmarked, while in the ST more than half are marked, with verbs of saying thematised in the English system.
- One point which requires further exploration is the circumstances under which verbs of saying are used as marked Themes and whether they are consistently used in this way. This can help to determine whether such verbs of saying in narrative texts should be considered marked or unmarked.

3.2 Mood and Modality Analysis

Based on the Mood and Modality analysis of the ST, TT1 and TT2 and the comparisons between them, the following observations can be made:
(1) In terms of FREEDOM (whether the clause is free or bound), all the clauses are free in the ST, except clause 59, which is bound, and all the clauses are free in TT1 and TT2.
(2) In terms of MOOD TYPE, all the clauses are indicative: declarative in this corpus.
(3) In terms of POLARITY (whether the clause is positive or negative), all the clauses are positive in this corpus.
(4) In terms of DEICTICITY (whether the clause is temporal or modal), all the clauses are temporal in this corpus.
(5) As seen in Table 4, the Subject differs from the Theme in the ST: in unmarked declarative clauses the Subject conflates with the Theme, but since the ST has quite a number of marked declarative clauses with thematised verbs of saying, its Subjects and topical Themes diverge. In the two Chinese translations, by contrast, the distribution of Theme and Subject remains the same, as the verbal clauses in Chinese are unmarked declarative clauses.
(6) In terms of Finite verbs, as shown in Table 5, the most frequently used verb of saying is 'said'; this is mirrored in the translations, which use '曰' (PY: yuē; BT: say) in TT1 and '说' (PY: shuō; BT: say) in TT2.
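The observations in 3.1 and 3.2 rest on a per-clause feature annotation. The following Python sketch shows one hypothetical way such an annotation could be encoded and tallied; the Clause fields and the toy entries are illustrative stand-ins, not the study's actual coding scheme.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Clause:
    text: str            # the projecting clause
    textual_theme: bool  # is a textual Theme (e.g. 'and', 'but') present?
    topical_theme: str   # e.g. 'he', 'Holmes', or 'ELLIPTICAL'
    marked: bool         # True when a verb of saying is thematised (ST only)
    finite: str          # the verb of saying, e.g. 'said', 'asked'

# Toy entries standing in for the 70 direct-speech verbal clauses:
corpus = [
    Clause('he remarked', False, 'he', False, 'remarked'),
    Clause('said Holmes', False, 'said', True, 'said'),
    Clause('I answered', False, 'I', False, 'answered'),
    Clause('said he', False, 'said', True, 'said'),
]

print('textual Themes:', sum(c.textual_theme for c in corpus))
print('marked Themes :', sum(c.marked for c in corpus))
print('top Finite    :', Counter(c.finite for c in corpus).most_common(1))
```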
3.3 Transitivity Analysis

On the basis of the Transitivity analysis of the ST, TT1 and TT2 and the comparisons between them, the following observations can be made:
(1) The Sayer is the same as the Subject in this corpus.
(2) Since this study's corpus is comprised of verbal clauses, the Process type is verbal, as indicated by the various verbs of saying, which are the same as the Finite verbs in this corpus.
(3) Six circumstantial elements are found in the verbal clauses of the ST, but in the translations these elements have been changed, with some being omitted in the translation process, while others are changed to a separate clause.

3.4 Logico-Semantic Analysis

This section investigates projection in clause complexing (at the clause complex level), that is, the logico-semantic relations between the projecting clauses and the projected clauses in the two Chinese translations. Based on the system of clause complexing, the systems of TAXIS and LOGICO-SEMANTIC RELATION intersect to define a basic set of clause nexuses.

(i) TAXIS (degree of interdependency): hypotaxis/parataxis. All clauses linked by a logico-semantic relation are interdependent: that is the meaning of relational structure, namely that one unit is interdependent on another unit. Two clauses related as interdependent in a complex may be treated as being of equal status, or as being of unequal status. Degree of interdependency is known technically as taxis; and the two different degrees of interdependency as parataxis (equal status) and hypotaxis (unequal status). Hypotaxis is the relation between a dependent element and its dominant, the element on which it is dependent. Contrasting with this is parataxis, which is the relation between two like elements of equal status, one initiating and the other continuing. The distinction between parataxis and hypotaxis has evolved as a powerful grammatical strategy for guiding the rhetorical development of text, making it possible for the grammar to assign different statuses to figures within a sequence. The choice between parataxis and hypotaxis characterises each relation between two clauses (each nexus) within a clause complex; and clause complexes are often formed out of a mixture of parataxis and hypotaxis. (Halliday and Matthiessen, 2014: 440-441)

(ii) LOGICO-SEMANTIC RELATION: expansion/projection. There is a wide range of different logico-semantic relations, any of which may hold between a primary and a secondary member of a clause nexus. But it is possible to group these into a small number of general types, based on the two fundamental relationships of (1) expansion, where the secondary clause expands the primary clause by (a) elaborating it, (b) extending it or (c) enhancing it; and (2) projection, where the secondary clause is projected through the primary clause, which instates it as (a) a locution or (b) an idea. (Halliday and Matthiessen, 2014: 443)

Within the general categories of expansion and projection, we first recognise a small number of subtypes, as shown in Table 6. Since this paper focuses on verbal projection, we only consider the representation of the content of saying (locutions) rather than the content of thinking (ideas).
Table 6: Types of logico-semantic relations (projection)
- Locution, notated " (double quotation marks): one clause is projected through another, which presents it as a locution, a construction of wording (says).
- Idea, notated ' (single quotation marks): one clause is projected through another, which presents it as an idea, a construction of meaning (thinks).
Source: adapted from Halliday and Matthiessen (2014: 444).

With respect to the mode of projection, on the basis of the 70 verbal clauses projecting direct speech and the 12 projecting indirect speech identified in this text, it is safe to say that we have 70 examples of parataxis and 12 examples of hypotaxis. By analysing the logico-semantic relations of verbal projection, we are able to identify the dominant logico-semantic patterns (e.g., hypotactic enhancement, paratactic extension or hypotactic projection/idea). Of the 82 verbal clause complexes, the distribution of hypotaxis and parataxis is shown in Table 7. From Table 7, we find that in the ST, TT1 and TT2, parataxis occurs more frequently than hypotaxis, and that the occurrences of parataxis and hypotaxis in TT1 are fewer than in the ST and TT2.

As an example, let us now consider one clause complex consisting of two clauses and one clause complex of three clauses. The distribution of the logico-semantic types is shown in Table 8:

Table 8: Comparison of the logico-semantic types in the ST, TT1 and TT2
           | ST                        | TT1                                              | TT2
2 clauses  | "1 ^ 2 (39 times)         | 1 ^ "2 (27 times)                                | "1 ^ 2 (34 times)
3 clauses  | "1 ^ 2α ^ 2×β (12 times)  | 1 ^ +2 ^ "3 (3 times); 1 ^ "2×β ^ "2α (2 times)  | "1 ^ 21 ^ 2+2 (9 times)

The following two examples demonstrate the differences between TT1 and TT2. In the first example, we find that TT2 follows the ST closely, sharing the same logico-semantic pattern of projected clause ^ projecting clause. However, TT1 differs in terms of the sequence of the projecting clause and projected clause; the reason for this is that, in classical Chinese, the sequence of the projected clause and projecting clause cannot be reversed. Therefore, its logico-semantic patterns differ from those seen in the ST and TT2. The second example shows the pattern in the ST: projected clause ^ projecting clause ^ hypotactic dependent clause that qualifies the projecting clause. TT2 again follows the ST closely, while TT1 differs in that the circumstantial element is changed to a separate clause, which is understood as extension rather than enhancement.
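To make pattern counts of this kind reproducible, the annotated nexus strings can be tallied directly. A minimal sketch, assuming the patterns have been transcribed in the SFL notation used above; the counts mirror Table 8, and the taxis test keys on the Greek-letter statuses:

```python
from collections import Counter

# Pattern strings in the notation of Tables 7-8: '"' marks a quoted
# locution, numerals mark paratactic statuses, Greek letters mark
# hypotactic statuses. Counts follow Table 8 (2- and 3-clause complexes).
ST  = ['"1 ^ 2'] * 39 + ['"1 ^ 2α ^ 2×β'] * 12
TT1 = ['1 ^ "2'] * 27 + ['1 ^ +2 ^ "3'] * 3 + ['1 ^ "2×β ^ "2α'] * 2
TT2 = ['"1 ^ 2'] * 34 + ['"1 ^ 21 ^ 2+2'] * 9

def has_hypotaxis(pattern: str) -> bool:
    # Greek-letter statuses (α, β) signal a hypotactic nexus.
    return 'α' in pattern or 'β' in pattern

for name, patterns in (('ST', ST), ('TT1', TT1), ('TT2', TT2)):
    print(name, 'dominant:', Counter(patterns).most_common(1),
          '| hypotactic complexes:', sum(map(has_hypotaxis, patterns)))
```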
In this section, various analyses at both clause and clause complex levels have been conducted, revealing three findings. Firstly, from the analysis of textual Theme, interpersonal Theme and topical Theme, differences and similarities in the thematic choices of the two Chinese translations were examined. The position of the verbs of saying in the verbal clauses demonstrates linguistic differences between English and Chinese: verbs of saying (Process) can serve as the topical Theme in English (highly marked), but this is impossible in Chinese, which leads to different thematic choices. Secondly, from the analysis of mood and modality, we find that the most frequently used Subjects in the text are 'he', 'Holmes' and 'I', which shows that the main character in the text is Holmes, and that 'I' is used frequently in dialogues. Thirdly, three frequently used logico-semantic relations are identified in the source text: paratactic quoting, hypotactic enhancement and paratactic extension, and the preferred logico-semantic patterns are generalised as follows: (i) "1 ^ 2; and (ii) "1 ^ 2α ^ 2×β.

In addition, some logico-semantic relations are changed in the translation, particularly in TT1. For instance, "1 ^ 2 in the ST is changed to 1 ^ "2 in TT1, while TT2 keeps the same sequence. Enhancement in the ST is sometimes changed to extension in TT1, in which a certain circumstantial element is changed to a separate clause.

4 Translators' Choices in TT1 and TT2

In this section, we compare the choices made by the translators of TT1 and TT2 against the source text. Different choices are made by the translators, as seen in the two Chinese translations, and these differences will be illustrated by examples.

(1) Addition: this can be the addition of projected clauses or projecting clauses. In this paper, addition refers to the addition of comments by the translator in the projected clause, the addition of explanations of reasoning in the projected clause, or the addition of a connecting link in the projecting clause.

Example 3 shows the addition of comments in the projected clause. ST: "Wedlock suits you," he remarked. ("1 ^ 2) TT2 closely follows the ST, and the logico-semantic relations stay the same as in the ST. However, TT1 adds a great deal of information that does not exist in the ST; this added information comprises the comments or opinions of the translator. In this example, the clause '时福已顾予笑曰' (BT: Then Holmes said to me with a smile) is not considered to be addition, because we can find the corresponding information in the previous context of the ST: 'Then he stood before the fire and looked me over in his singular introspective fashion.' Since certain information is added in the translation, logico-semantic relations are used to expand the translation, and it can be seen that the translator actively uses logico-semantic relations to link this newly-added information. In this example, the logico-semantic relations of paratactic extension and hypotactic enhancement are used to link the clauses '華生醫生, 久不見君矣, 吾聞閨房之中, 實有鎖鍵, 人而得妻, 即如猿猱之被桎梏, 君其一也' (BT: Doctor Watson, long time no see; I heard that in the wedding room there is actually a lock, and a person who gets a wife is like an ape being shackled, and you are one of them), and these clauses are new material added by the translator.

Example 4 shows the addition of repetitive information, or the emphasising of information already mentioned, in the projected clause ("2+2α×2; BT: but being able to look rather than observe is actually breaching God's will). In this example, TT1 adds information that does not exist in the ST: '上帝既付吾人以目, 即宜观察并用, 但能观而不能察者, 实违帝旨' (BT: God gives us eyes, so we should use looking and observing together; but being able to look rather than observe is actually breaching God's will). This information is concerned with how God has guided Holmes' eyes to observe the inside of Watson's left shoe, and it is used to demonstrate Holmes' logical reasoning process.

(2) Omission: this can be the omission of the projected clause, the projecting clause or both. Example 6 shows the omission of both the projecting clause and the projected clause. ST: You did not tell me that you intended to go into harness. TT1: omission of both the projecting clause and the projected clause. TT1 also shows the partial omission of the hypotactically dependent clause that qualifies the projecting clause, and therefore the relation of hypotactic enhancement is omitted in TT1.
On the basis of the above examples, we find that certain logico-semantic relations are missing because the projecting clause, the projected clause or both have been omitted during the translation process. By analysing and comparing the two translations of A Scandal in Bohemia, the following preliminary summary is obtained. Different choices are made by the translators of TT1 and TT2, as seen in Table 9. In particular, addition and omission are identified in TT1: (1) when comments are added by the translators to the projected clause and projecting clause, the logico-semantic relation of extension is used, and when explanations of logical reasoning are added, the logico-semantic relation of enhancement is used; (2) certain logico-semantic relations are missing because the projecting clause or the projected clause or both are omitted during the translation process.

Translators always leave traces of themselves in their translations, whether consciously or unconsciously. That is why we say that one thousand translators will produce one thousand different Hamlets: there is no translation without a translator. Within the scope of this study, we have come across different translators and the various choices they have made when translating the text. As far as this study is concerned, the translators of the earlier translation (TT1) tend to resort to drastic changes such as omission or addition, thus rendering translations that are more acceptable to the target readers' tastes, while the translator of the later translation (TT2) prefers to maintain the structure and information of the original text.

Conclusion

On the basis of the above analysis and the comparisons made between TT1 and TT2 of the source text A Scandal in Bohemia, the following generalisations can be made:
(1) Theme: from the preliminary analysis and findings of this study, it can be seen that it is more important to analyse Theme than Mood and Modality or Transitivity in verbal projection, as thematic differences are more noticeable than differences in mood or transitivity.
(2) TT1: the comparisons made between TT1 and TT2 suggest that the translation process produces more variation in TT1 when compared with the ST. For instance, in terms of textual Theme, TT1 chooses more textual Themes than the ST, while TT2 is more faithful to the ST.
(3) Semantic varieties: the logico-semantic analysis also highlights differences introduced in the translation process.
(4) Translators' choices: the translators made different choices in the two different historical periods.

In summary, the projecting clauses of verbal clauses were analysed in sections 3.1 to 3.3 by considering their textual, interpersonal and experiential features, in an attempt to compare the linguistic features of the ST and its two Chinese translations. We examined 70 verbal clauses projecting direct speech and 12 verbal clauses projecting indirect speech. The three perspectives of THEME, MOOD and MODALITY, and TRANSITIVITY were explored by analysing the 70 verbal clauses projecting direct speech. In general, TT2 was found to be similar to the ST in terms of MOOD and MODALITY and PROCESS, while TT1 and TT2 differed from the ST in terms of topical Themes because of Chinese linguistic features. Section 3.4 examined the logico-semantic relations between the ST and its two Chinese translations in verbal clause complexes, in order to investigate the various choices made by the translators of TT1 and TT2.
For instance, circumstantial elements can be translated directly, but the translators chose either to ignore them or to translate them in some other way. Although some similarities were shared between TT1 and TT2, notable differences were identified in terms of their linguistic features. In section 4, further examples from the two Chinese translations were analysed and compared in order to identify the choices made by the translators. In comparison to the translator of TT2, the translators of TT1 were found to be more flexible because, when compared to the source text, they incorporated more addition and omission into their translation. This is only a preliminary study of one pilot text, but some interesting findings have been revealed from the analysis of two translations from different time periods, in particular thematic differences and varied logico-semantic relations. If more texts were to be collected and analysed, the findings might display more variation between the two translations.
Anomalies in orbifold field theories

We study the constraints on models with extra dimensions arising from local anomaly cancellation. We consider a five-dimensional field theory with a U(1) gauge field and a charged fermion, compactified on the orbifold S^1/(Z_2 × Z_2'). We show that, even if the orbifold projections remove both fermionic zero modes, there are gauge anomalies localized at the fixed points. Anomalies naively cancel after integration over the fifth dimension, but gauge invariance is broken, spoiling the consistency of the theory. We discuss the implications for realistic supersymmetric models with a single Higgs hypermultiplet in the bulk, and possible cancellation mechanisms in non-minimal models.

Introduction

Theories formulated in D > 4 space-time dimensions may lead to a geometrical understanding of the problems of mass generation and symmetry breaking. Orbifold compactifications [1] of higher-dimensional theories are simple and efficient mechanisms to reduce their symmetries and to generate four-dimensional (4-D) chirality. Phenomenologically interesting orbifold models can be formulated either as explicit string constructions or as effective higher-dimensional field theories. The field-theoretical approach to orbifolds is currently fashionable because of its apparent simplicity and flexibility. However, it is well known that the rules for the construction of consistent string-theory orbifolds are quite stringent, and automatically implement a number of consistency conditions in the corresponding effective field theories: in particular, the cancellation of gauge, gravitational and mixed anomalies. Since anomalies are infrared phenomena, if we start from a consistent string model (the 'top-down' approach), anomaly cancellation must find an appropriate description in the effective field theory. Such a description, however, may be non-trivial, as for the Green-Schwarz [2] or the inflow [3] mechanisms. If, instead, we decide to work directly at the field-theory level (the 'bottom-up' approach), great care is needed, since orbifold projections do not necessarily preserve the quantum consistency of a field theory (as discussed, for example, in [4]). In particular, the question of anomaly cancellation must be explicitly addressed.

A first step in this direction was taken in ref. [5], which discussed the chiral anomaly in a five-dimensional (5-D) theory compactified on the orbifold S^1/Z_2. It was found that, in such a simple context, naive 4-D anomaly cancellation is sufficient to ensure 5-D anomaly cancellation. For a 5-D fermion of unit charge, and a chiral action of the Z_2 projection, the 5-D anomaly is localized at the orbifold fixed points, and is proportional to the 4-D anomaly:

  ∂_M J^M(x, y) = (1/2) [δ(y) + δ(y − πR)] Q(x, y),   (1)

where J^M is the 5-D current and

  Q(x, y) = (g_5^2/32π²) ε^{μνρσ} F_{μν}(x, y) F_{ρσ}(x, y)   (2)

is proportional to the 4-D chiral anomaly from a charged Dirac spinor in the external gauge potential A_μ(x, y). In our notation, M = (μ = 0, 1, 2, 3; 4); x ≡ (x^0, x^1, x^2, x^3) are the first four coordinates; y ≡ x^4 is the fifth coordinate, compactified on a circle of radius R; y = 0, πR are the two fixed points with respect to the Z_2 symmetry y → −y; and g_5 is the 5-D gauge coupling constant. (We work on the orbifold covering space S^1, and we normalize the δ-functions so that each fixed-point δ-function integrates to unity over the covering space.)

In this letter we show that the phenomenon discussed in [5] does not persist in more general cases. To be definite, we consider a 5-D field theory with a U(1) gauge field A_M and a massless fermion ψ of unit charge, compactified on the orbifold S^1/(Z_2 × Z_2').
The actions of the two parities are y → −y and y' → −y', respectively, where y' = y − πR/2. Both the gauge and the fermion fields are taken to be periodic on the circle. We decompose the Dirac spinor ψ into left and right spinors with parities (+, −) and (−, +), respectively: ψ ≡ ψ_{+−} + ψ_{−+}. Notice that a standard fermion mass term is forbidden by the Z_2 × Z_2' symmetry. As for the gauge field, we assign (+, +) parities to A_μ and (−, −) to A_4. Although the theory has no massless 4-D chiral fermion, a non-vanishing anomaly is induced, given by eq. (10) below. The theory can be trivially supersymmetrized, by embedding its field content in a U(1) vector multiplet and a charged hypermultiplet.

From the point of view of anomalies, our simple example reproduces the essential features of a recently proposed phenomenological model [6], whose light spectrum contains just the states of the Standard Model (SM), with an anomaly-free fermion content. The underlying 5-D theory is supersymmetric, with vector multiplets containing the SM gauge bosons, and hypermultiplets containing the SM quarks and leptons. In addition, the model of ref. [6] has just one charged hypermultiplet, which contains the SM Higgs boson. Such a model has received some attention because it may give a prediction for the Higgs mass, even though it was recently shown [7] that the Higgs self-energy receives a quadratically divergent one-loop contribution. The latter corresponds to the appearance of a Fayet-Iliopoulos (FI) term, with divergences localized at the orbifold fixed points, which immediately hints at a possible connection with anomalies.

The content of the present letter is organized as follows. We begin by showing that, even if there are no 4-D massless fermions in the spectrum, our simple 5-D theory is actually anomalous. Localized anomalies, with opposite signs, appear at the fixed points of the two orbifold projections. The integrated anomaly vanishes, reflecting the absence of any one-loop anomaly among 4-D massless states, but there are anomalous triangle diagrams when at least one of the external states is a massive Kaluza-Klein (KK) mode. We focus our attention on the U(1)^3 gauge anomaly, which we explicitly compute along the lines of [5]. In realistic extensions, such as [6], similar results would hold for the U(1)_Y^3, U(1)_Y-SU(2)_L^2 and U(1)_Y-gravitational anomalies. We then argue that this anomaly leads to a breakdown of 4-D gauge invariance. Hence, in its minimal form, the model is inconsistent, even as an effective low-energy theory. Next, we consider the supersymmetric extension of our simple theory, and compute the precise expression for the one-loop FI term. Finally, we discuss the possible modifications that could restore the consistency of the theory.

U(1) anomalies

In this section, we take the theory defined in the Introduction and we compute the U(1)^3 anomaly, following closely the method and the notation of ref. [5] (for an early computation of this type, see also ref. [8]). The KK wavefunctions ξ_n^{ab} for fields φ_{ab} of definite Z_2 × Z_2' parities (a, b = ±) are defined as:

  ξ_n^{++}(y) = (η_n/√(πR)) cos(2ny/R),   ξ_n^{+−}(y) = (1/√(πR)) cos((2n+1)y/R),
  ξ_n^{−+}(y) = (1/√(πR)) sin((2n+1)y/R),   ξ_n^{−−}(y) = (1/√(πR)) sin((2n+2)y/R),   (3)

where η_n is 1/√2 for n = 0 and 1 for n > 0. They form a complete orthonormal basis of periodic functions on S^1, with given Z_2 × Z_2' parities. The Fourier modes of a field φ_{ab} are defined as:

  φ_{ab}(x, y) = Σ_{n≥0} ξ_n^{ab}(y) φ_n^{ab}(x),   (4)

and the mode φ_n^{ab} has mass 2n/R, (2n+1)/R, (2n+1)/R or (2n+2)/R for (a, b) = (+,+), (+,−), (−,+) or (−,−), respectively.
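As a quick numerical sanity check of the mode functions in eq. (3) (with the normalization assumed there), one can verify orthonormality on the covering circle and note that the lightest (+,−) fermion mode has mass 1/R, i.e. there is no fermionic zero mode; a minimal Python sketch:

```python
import numpy as np

R = 1.0
y = np.linspace(0.0, 2*np.pi*R, 400001)   # covering space S^1
h = y[1] - y[0]

def xi_pm(n, yy):
    # (+,-) wavefunctions as written in eq. (3): cos((2n+1)y/R)/sqrt(pi R)
    return np.cos((2*n + 1)*yy/R)/np.sqrt(np.pi*R)

def integrate(f):
    # composite trapezoid rule on the grid y
    return h*(0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])

# Orthonormality on S^1: the integral of xi_m xi_n equals delta_mn
for m in range(3):
    for n in range(3):
        val = integrate(xi_pm(m, y)*xi_pm(n, y))
        assert abs(val - (1.0 if m == n else 0.0)) < 1e-8

# The lightest (+,-) mode has mass (2*0+1)/R: no fermionic zero mode.
print('orthonormality OK; lightest (+,-) mass =', 1/R)
```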
In the gauge A_4 = 0, the 4-D Lagrangian for the Fourier modes ψ_n ≡ ψ_n^{+−} + ψ_n^{−+} can be written in terms of effective gauge fields A^{±∓}_{μ,mn}, obtained by convolving the U(1) connection A_μ with the fermion wavefunctions. Interpreting ψ_n as a single fermion with a flavour index and chiral couplings to the gauge fields A^{±∓}_{μ,mn} through the currents J^{μ±}_{mn} = ψ̄_m γ^μ P_± ψ_n, it is straightforward to adapt the standard computation of anomalies to obtain the anomalous divergences of these currents, which also involve the pseudoscalar densities J^{4±}_{mn} = ψ̄_m iγ^5 P_± ψ_n. The result can be easily Fourier-transformed back to configuration space by convolution with ξ_m^{±∓}(y) ξ_n^{±∓}(y). (Notice that the A^{±∓}_{μ,mn} are not Fourier modes of the type (4), but can be easily related to them.) Using completeness, one finds an expression proportional to the quantity Q defined in eq. (2). The anomaly in the vector current hence reads

  ∂_M J^M(x, y) = (1/4) [δ(y) + δ(y − πR) − δ(y − πR/2) − δ(y − 3πR/2)] Q(x, y).   (10)

Therefore, although the integrated anomaly vanishes, there are anomalies, localized at the fixed points, that are equal in magnitude to 1/4 (or 1/2, if we sum the contributions from identified fixed points) of the anomaly from a 4-D Weyl fermion. The full 5-D theory is thus inconsistent (at least in its minimal form).

Let us now rewrite eq. (10) in terms of standard Fourier modes of the current and gauge fields. Recalling that both have (+, +) parities, the Fourier transform of (10) yields coefficients proportional to η_n η_i η_j [1 − (−1)^{n+i+j}], with the 4-D coupling g_4 = g_5/√(2πR). This quantity encodes the triangular anomaly between three external KK modes of the photon with indices (n, i, j), as illustrated in fig. 1. The anomaly vanishes for n + i + j even, and in particular for n = i = j = 0, reflecting the fact that there is no 4-D anomaly for the massless modes: all non-vanishing anomalous diagrams involve at least one massive mode.
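The parity selection rule just quoted can be checked directly from the localized form (10): project the bracket of δ-functions onto three (+,+) photon wavefunctions and sum over the four fixed points with their signs. A minimal numerical sketch, with the overall normalization omitted:

```python
import numpy as np

R = 1.0
# Fixed points of the two projections, with the signs appearing in eq. (10):
# +1 at the Z2 points (0, pi R), -1 at the Z2' points (pi R/2, 3 pi R/2).
fixed_points = [(0.0, +1), (np.pi*R, +1), (np.pi*R/2, -1), (3*np.pi*R/2, -1)]

def xi_pp(n, yf):
    # (+,+) photon wavefunctions of eq. (3), up to overall normalization.
    eta = 1/np.sqrt(2) if n == 0 else 1.0
    return eta*np.cos(2*n*yf/R)

def coeff(n, i, j):
    # Anomaly coefficient for three external KK photons (n, i, j),
    # up to an overall constant: sum over fixed points with signs.
    return sum(s*xi_pp(n, yf)*xi_pp(i, yf)*xi_pp(j, yf)
               for yf, s in fixed_points)

for nij in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 1, 0)]:
    status = 'vanishes' if abs(coeff(*nij)) < 1e-12 else 'anomalous'
    print(nij, status)   # nonzero exactly when n + i + j is odd
```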
These diagrams make the full theory inconsistent. However, it may be asked whether the low-energy effective theory obtained by integrating out all massive modes could be consistent. This is not the case, because gluing such diagrams through heavy lines produces 4-D gauge-symmetry-breaking effective interactions among zero modes. Consider for instance a 3-loop diagram obtained by gluing two anomalous triangles through two massive photons, as depicted in fig. 2. This represents a contribution to the two-point function Π^{μν} of the zero-mode photon that violates gauge invariance. The non-vanishing longitudinal component of Π^{μν} is encoded in q_μ q_ν Π^{μν}(q), which feels only the anomalous part of the triangular subdiagram [9]. Another example is the four-point function involving two longitudinal and two transverse zero-mode photons, which receives a non-vanishing finite two-loop contribution controlled by the anomaly.

These gauge anomalies could be computed in an independent way by using their well-known relation with chiral anomalies and index theorems, which is particularly clear in Fujikawa's approach. In this formalism, the integrated chiral anomaly of a 4-D Dirac fermion is encoded in the quantity Tr_{D=4}[γ^5] = index(D̸), where D̸ is the Dirac operator. In the case of a 5-D theory compactified on S^1, the anomaly vanishes, since the Hilbert space splits into two identical components of opposite chirality and Tr_{D=5}[γ^5] = 0. This can be easily extended to an S^1/(Z_2 × Z_2') orbifold compactification. The trace must now be restricted to invariant states only; this can be achieved by inserting into the unconstrained trace a Z_2 × Z_2' projector P. Denoting by g and g' the generators of Z_2 and Z_2' respectively, the explicit expression of this projector is P = (1/4)(1 + g + g' + gg'). Each element in P, when inserted in the trace, leads to a so-called equivariant index of the Dirac operator, which has non-vanishing support only at the fixed points of the element.

The identity in P gives a vanishing result, as in the S^1 compactification. Similarly, the gg' element also gives a vanishing contribution, because it generates a translation along the compact direction that does not affect chirality. On the other hand, the elements g and g' act chirally on the Dirac fermion ψ (gψg^{-1} = γ^5 ψ, g'ψg'^{-1} = −γ^5 ψ) and give non-vanishing contributions. Both have two fixed points, and the integrated anomaly is thus the sum of the four fixed-point contributions, where the relative sign between the contributions associated with g and g' is due to their opposite action on fermions. This leads to (10).

Supersymmetric models

The result found for the anomalies in section 2 can be trivially extended to supersymmetric configurations, where the U(1) gauge field belongs to a 5-D N = 1 vector multiplet and the Dirac fermion ψ to a charged hypermultiplet. As such, all the above considerations apply also to the model of ref. [6]. In particular, focusing on the Higgs hypermultiplet, a doublet under SU(2)_L with hypercharge Y = +1, we get a localized U(1)_Y^3 anomaly that is twice the one in (10). The reader may wonder whether such a localized anomaly has any relation with the one-loop FI term recently found in [7].

The method of the previous section can be easily extended to the computation of the full one-loop FI term. The relevant part of the Lagrangian couples D(x, y), the third component of the triplet of N = 2 auxiliary fields, to the scalars φ_m^{±±}, the modes (defined according to eqs. (3) and (4)) of the two scalars in the Higgs hypermultiplet. Considering again the mode indices as flavour indices, we find a FI term for each mode of D, regularized by an ultraviolet cutoff Λ. By Fourier-transforming back to configuration space, we can write the FI term as ξ(y) D(x, y), where the exact profile of ξ(y) can be explicitly evaluated. By first summing over the KK states, the 4-D momentum integral is convergent for generic y, yielding a profile that diverges as ỹ^{−3} when ỹ tends to 0, where

  ỹ = y − (πR/2) Σ_{l>0} θ(y − lπR/2)

is the restriction of y to the interval [0, πR/2[. The limit ỹ → 0 corresponds to y approaching one of the four fixed points y_i = (i − 1)πR/2 (i = 1, 2, 3, 4); away from the fixed points, there is only a finite bulk contribution. The divergent part of ξ(y) is easily evaluated by going back to the mode sum and using the completeness of the KK wavefunctions. The structure of the FI term is then a quadratically divergent piece, proportional to Λ² and localized at the four fixed points as a sum of δ-functions, plus a finite function K(y). Therefore, the divergent part of the induced FI term is localized at the orbifold fixed points, as is the anomaly. This is a remnant of the relation between FI terms and mixed U(1)-gravitational anomalies in supersymmetric theories.

Outlook

We have seen that orbifold field theories can be anomalous even in the absence of an anomalous spectrum of zero modes. It is then important to understand whether there exist anomaly cancellation mechanisms, and whether they can be consistently implemented; for definiteness, we discuss this issue by referring once more to the case of S^1/(Z_2 × Z_2'). One possibility would be to add localized fermions at the fixed points, analogous to the twisted sectors of string compactifications. However, as we have seen in section 2, a bulk fermion produces only half of the anomaly of a Weyl fermion at each fixed point. Therefore, this possibility may be generically cumbersome to realize without the guidance of an underlying string theory. Another possibility would be to implement an anomaly cancellation mechanism of the Green-Schwarz [2] or inflow [3] type. The former would lead to a spontaneous breaking of the U(1) gauge symmetry [10].
For the latter, we must cope with the fact that a 5-D Chern-Simons term ε^{MNOPQ} A_M F_{NO} F_{PQ} (see [11]) cannot be added to the bulk Lagrangian, because it is not invariant under the two orbifold projections. However, we can imagine more general possibilities. For example, we could introduce a bosonic field χ with (−, −) parities and try to introduce the Chern-Simons term in combination with χ. The field χ should then dynamically acquire a vacuum expectation value with a non-trivial y-profile (spontaneously breaking the Z_2 × Z_2' discrete symmetry), thereby generating a sort of magnetic charge for the fixed points and leaving only very massive fluctuations. The resulting Chern-Simons term could then cancel, for an appropriate value of the coefficient, the one-loop anomaly. Notice that this mechanism can work only when the integrated anomaly vanishes.

It is interesting to observe that the presence and the structure of the anomalies that we have found in the S^1/(Z_2 × Z_2') orbifold could have been anticipated by analysing the intermediate models on S^1/Z_2 or S^1/Z_2'. Indeed, the S^1/Z_2 and S^1/Z_2' models have anomalies given by eq. (1), localized at the two Z_2 and Z_2' fixed points respectively, but with opposite signs, reflecting the difference between the Z_2 and Z_2' actions on the fermions.

For supersymmetric models, all the above considerations apply, but supersymmetry poses further constraints. It seems then quite difficult to obtain a consistent SUSY field theory on the S^1/(Z_2 × Z_2') orbifold with a single bulk Higgs hypermultiplet. On the contrary, the addition of a second Higgs hypermultiplet in the bulk, as in [12], would cancel at the same time the anomaly and the one-loop-induced FI term. It therefore seems that the necessity of having two Higgs doublets in 4-D supersymmetric extensions of the SM persists also in these higher-dimensional constructions.
Rheumatoid Arthritis Patients With Circulating Extracellular Vesicles Positive for IgM Rheumatoid Factor Have Higher Disease Activity

Rheumatoid arthritis (RA) is an autoimmune inflammatory disease that mainly affects synovial joints. Validated laboratory parameters for RA diagnosis are elevated blood levels of IgM rheumatoid factor (IgM-RF), anti-citrullinated protein autoantibodies (ACPA) and C-reactive protein (CRP), and an increased erythrocyte sedimentation rate (ESR). Clinical parameters used are the number of tender (TJC) and swollen joints (SJC) and the global patient visual analog score (VAS). To determine disease remission in patients, a disease activity score (DAS28) can be calculated based on SJC, TJC, VAS and ESR (or alternatively CRP). However, subtle and more predictive changes for following treatment responses in individual patients cannot be measured by the above-mentioned parameters, nor by measuring cytokine levels in blood. As extracellular vesicles (EVs) play a role in intercellular communication and carry a multitude of signals, we set out to determine their value as a biomarker for disease activity. EVs were isolated from platelet-free plasma of 41 RA patients and 24 healthy controls (HC) by size exclusion chromatography (SEC). We quantified the particle and protein concentrations, using NanoSight particle tracking analysis and micro-BCA respectively, and observed no differences between RA patients and HC. In plasma of 28 out of 41 RA patients IgM-RF was detectable by ELISA, and in 13 of these 28 seropositive RA patients (RF+RA) IgM-RF was also detected on their isolated pEVs (IgM-RF+). In seronegative RA patients (RF−RA) we did not find any RF present on pEVs. When comparing disease parameters we found no differences between RF+RA and RF−RA patients, except for increased ESR levels in RF+RA patients. However, RF+RA patients with IgM-RF+ pEVs showed significantly higher levels of CRP and ESR, and VAS and DAS28 were also significantly increased compared to RF+RA patients without IgM-RF+ pEVs. This study shows for the first time the presence of IgM-RF on pEVs in a proportion of RF+RA patients, who have higher disease activity.

INTRODUCTION

Rheumatoid arthritis is an autoimmune disease in which the body's immune system mistakenly attacks the synovial joints, with joint inflammation as a hallmark feature of this disease (1). The diagnosis of RA relies on physical examination and laboratory blood testing. Rheumatoid factor (RF) is found in 50% of early RA patients, rising to 90% at advanced stages of disease (2). Seropositive RA patients (RF+RA) have detectable RF in their circulation, and it has been shown that high RF levels are predictors of more severe forms of disease (3). The pathogenesis of RA may differ between RF+RA and RF−RA patients, with the latter often reported as less severe, although studies are conflicting (4). RF contributes to the disease process of RA by a mechanism in which large immune complexes are formed and complement activation is induced (5). Cardiovascular disease and involvement of organs such as the lungs, heart and eyes are well-recognized complications occurring primarily in RF+RA patients (6). To determine the disease activity in all RA patients, a formula (DAS28) has been developed by our department that includes the physical examination of 28 joints for tenderness and swelling, the erythrocyte sedimentation rate (or C-reactive protein levels instead) and a patient global visual analog score (of pain or global health) (7).
DAS28 is a good tool to define remission in established RA but has limited value in monitoring disease processes and responses. It is well recognized that numerous cytokines play an important role in the local and systemic inflammatory response in RA, and although targeting these cytokines using biologicals is therapeutically successful in many RA patients, their usefulness as biomarkers to monitor disease activity and treatment response is poor; even a combination of 12 cytokines in the multi-biomarker disease activity (MBDA) score has limited predictive value (8).

Until now, most research on rheumatoid arthritis has focused on cytokines as main effectors in disease progression; however, cell-cell communication involves a much broader scope of responses, including proliferation, apoptosis and migration. To communicate, a cell can additionally release extracellular vesicles (EVs), and recently an important role has been postulated for these EVs as communicators between resident and inflammatory cells (9). EVs play a regulatory role in immunity during health and disease (10, 11) and are suggested to be involved in autoimmune disease (12). They are released from cells and are detectable in body fluids such as blood, synovial fluid, urine and breast milk (13). They contain numerous proteins (including cytokines), lipids (prostaglandins), RNA, DNA and sometimes even cell organelles such as mitochondria (14). Three subtypes of EVs have been described: apoptotic bodies (1-10 µm), microvesicles (100-1000 nm) and exosomes (30-100 nm), which are classified based on their origin: programmed cell disintegration, plasma membrane outward budding, and release via multivesicular bodies, respectively (15). Interestingly, specific membrane proteins are enriched in exosomes in a cell-type-dependent fashion (13, 16). Additionally, preliminary data suggest that exosome content differs between individual people. This enrichment of specific proteins in exosomes, compared to those found in the cytoplasm of the donor cell, could mean that these vesicles fulfil some distinctive function (16).

It is known that B-cells also release EVs that carry the B-cell receptor on their surface, an immunoglobulin that binds and presents antigens to T-cells (17). The presence of IgM-RF in RF+RA patients clearly points to a role of B-cells and plasma cells in the pathogenesis of this disease (18). A B-cell-depleting strategy using a monoclonal anti-CD20 antibody (Rituximab) is a highly effective therapy in RA patients (19). Therefore, we investigated the presence of IgM-RF on EVs and its relation to the activity of RA as determined by laboratory and clinical parameters.

Blood Donors

Blood was obtained from 41 RA patients fulfilling the ACR/EULAR classification (20), of whom 28 patients had an IgM-RF titer > 10 IU/ml (RF+RA). Tender (TJC) and swollen joint counts (SJC) were assessed by physicians, and the global patient visual analog score (VAS) was determined by the patient, who set a point on a 100 mm line as a reflection of their disease feeling, after which the distance to the point was measured (0 = no disease, 100 = worst disease feeling). ESR and CRP levels were determined by standard laboratory blood tests in our hospital. The disease activity score (DAS28) was calculated based on SJC, TJC, VAS and ESR. An overview of the rheumatoid arthritis disease status and medication of RF+RA and RF−RA patients is shown in Table 1. As age-matched controls, blood from healthy controls (HC) was obtained from the blood transfusion department (Sanquin, Nijmegen, The Netherlands). All donors provided informed consent under institutional ethics committee-approved protocols.
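For reference, the DAS28(ESR) combines the four inputs above in a standard weighted formula. The sketch below uses the published weights, with illustrative inputs rather than patient data from this study:

```python
import math

def das28_esr(tjc28: int, sjc28: int, esr: float, vas_mm: float) -> float:
    """Standard DAS28 with ESR; vas_mm is the patient global VAS in mm (0-100)."""
    return (0.56*math.sqrt(tjc28) + 0.28*math.sqrt(sjc28)
            + 0.70*math.log(esr) + 0.014*vas_mm)

# Illustrative example (not patient data from this study):
score = das28_esr(tjc28=6, sjc28=4, esr=28, vas_mm=55)
print(round(score, 2))  # ~5.0; common cut-offs: <2.6 remission, >5.1 high activity
```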
Blood Sample Preparation

Blood samples were taken in ethylenediaminetetraacetic acid (EDTA) tubes (BD, Plymouth, UK) and centrifuged within 1 h for 10 min at 1690 g at 4 °C to obtain plasma. Plasma was centrifuged at 10,000 g for 30 min at 4 °C to obtain platelet-free plasma (pfp). Thereafter the supernatant was passed through a 0.22 µm filter (Whatman) and aliquoted. Aliquots were stored at −80 °C until EV isolation.

Plasma EV Isolation

pEVs were isolated by size exclusion chromatography (SEC) using the protocol described by Lobb et al. (21). In short, a sterile column was prepared for SEC using a 10 ml syringe stacked with Sepharose CL-2B (Pharmacia, Uppsala, Sweden). After washing the column with phosphate-buffered saline (PBS) containing 0.32% citrate (pH 7.4, autoclaved), 0.5 ml of pfp was loaded and eluted using PBS/0.32% citrate buffer. Eluate fractions of 1 ml were collected, and fraction 5, containing the EVs, was stored at 4 °C for further use.

Protein Measurement

The amount of protein was measured using a Micro BCA Protein Assay Kit following the protocol provided by the manufacturer (Thermo Scientific, Rockford, USA). pEV samples were diluted 10, 20, 40 and 80 times in 0.9% NaCl, and after 2 h incubation at 37 °C absorbance was measured using a Bio-Rad iMark microplate reader.

Nanoparticle Tracking Analysis

Vesicle size distribution was estimated from the Brownian motion of particles using a NanoSight NS300 (Sysmex, Etten-Leur, The Netherlands) with Nanoparticle Tracking Analysis 3.2 software (NanoSight, Amesbury, UK). Vesicles were diluted in PBS until an optimal concentration for reliable analysis was reached (20-80 particles per frame). Each sample was measured for 60 s (in duplicate), using the following software settings: flow rate 50, camera level 10 and detection threshold 5.

Transmission Electron Microscopy

Fraction 5 of the SEC eluate was washed using Vivaspin-2 columns (Sartorius Stedim Biotech GmbH, Goettingen, Germany) to remove salts from the solution, and 3 µl of EVs in deionized water was placed on a nickel grid and allowed to air-dry for 45 min. The grids were then washed by transferring them onto several drops of deionized water. Negative contrast staining was performed by incubating the grids on top of drops of 6% uranyl acetate. Excess fluid was removed and the grids were allowed to dry before examination in a JEM-1200 transmission electron microscope (JEOL, The Netherlands).

Sucrose Gradient

The density of isolated pEVs was determined by the sucrose gradient method described by Chiou (22). In short, 100 µl of SEC-isolated pEVs was mixed with 1 ml 70% sucrose (Sigma). Thereafter, a sucrose gradient was layered on top of the 70% layer in a centrifuge tube (14 × 19 mm; Beckman Instruments Inc., USA) and spun for 18 h in an SW 40 Ti rotor (100,000 g at 4 °C). After centrifugation, 1 ml fractions were taken and washed with PBS. Finally, EVs were pelleted at 166,000 g for 90 min and taken up in 100 µl PBS. Particle concentration was measured by NTA.

IgM-RF Detection

IgM-RF was detected by ELISA. In short, a 96-well plate was coated with aggregated human IgG (Sigma). After washing, pEVs were added and incubated for 90 min. Wells were washed twice, after which HRP-labeled goat anti-IgM F(ab')2 fragment (BioMP) was added and incubated for 90 min.
Following a third washing step, TMB was added as substrate for the HRP. The enzyme reaction was stopped by addition of acid, and the plate was read at 450 nm. IgM levels were calculated using a standard curve. RA patients with an IgM-RF titer > 10 IU/ml in plasma were scored as RF+ by the clinical laboratory. For pEVs, IgM-RF levels > 2 IU/ml were regarded as IgM-RF+ pEVs in our study.

Detection of Labeled pEVs Bound to Protein L Beads

Based on the protocol previously described (23), pEVs were diluted in PBS before addition of 300 µl of Diluent C and 1 µl PKH26 (Sigma). The samples were gently mixed for 2 min before adding 500 µl 1% BSA to stop the membrane staining, and were loaded onto 300 kDa Vivaspin filters (Sartorius Stedim Biotech GmbH, Goettingen, Germany). After centrifugation at 4,000 g, samples were washed three times with 2 ml PBS and taken up in 500 µl PBS. Labeled pEVs were bound to immobilized Protein L magnetic beads (Thermo Scientific, Rockford, USA). After a short incubation in a magnet, the unbound pEVs were collected. After washing three times with PBS + 0.1% BSA, pEVs bound to Protein L magnetic beads were collected. Fluorescence of Protein L-bound and unbound pEVs was measured on a fluorometer (Clariostar, BMG).

Detection of IgG Binding to pEVs

IgG binding to IgM-RF+ pEVs and to Protein L-unbound IgM-RF+ pEVs was measured by bead-assisted flow cytometry (FC). pEVs were coupled to anti-CD63 magnetic beads (Thermo Fisher Scientific) for 3 h at RT. Unbound pEVs were removed by three washing steps. Thereafter, CD63+ pEVs were incubated with IgG-PE (eBioscience) for 1 h, and after three washing steps the coupled IgG-PE was measured by FC.

Statistical Analysis

All data are expressed as mean ± SD. Data were compared using the two-tailed Mann-Whitney U-test. Values of p < 0.05 were considered to indicate statistical significance. Correlations are represented by the Pearson correlation coefficient (r) and its p-value. All statistical analyses were performed using GraphPad Prism 5.01 (GraphPad Software, La Jolla, CA).
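The comparisons and correlations described above map directly onto standard SciPy routines. A minimal sketch with made-up numbers (not the study's measurements) to illustrate the two tests used:

```python
import numpy as np
from scipy import stats

# Made-up example values, standing in for CRP in the two pEV subgroups:
crp_igmrf_pos = np.array([12.0, 25.0, 8.0, 30.0, 18.0, 22.0])
crp_igmrf_neg = np.array([3.0, 6.0, 2.0, 9.0, 4.0, 7.0])

u, p = stats.mannwhitneyu(crp_igmrf_pos, crp_igmrf_neg, alternative='two-sided')
print(f'Mann-Whitney U = {u:.1f}, p = {p:.4f}')

# Pearson correlation, e.g. IgM-RF on pEVs vs. IgM-RF in pfp:
rf_on_pev = np.array([2.5, 4.0, 3.1, 8.2, 5.5, 2.9])
rf_in_pfp = np.array([40.0, 310.0, 55.0, 120.0, 900.0, 75.0])
r, p = stats.pearsonr(rf_on_pev, rf_in_pfp)
print(f'Pearson r = {r:.2f}, p = {p:.4f}')
```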
RESULTS

Characterization of pEVs

To separate EVs from plasma proteins, SEC isolation was used as described before (21). Collection of the eluate showed that fraction 5 displayed the highest particle concentration (detected by NTA), while in fractions 6 and above high protein levels were detected (measured by micro-BCA assay) (Figure 1A). The particle size in SEC eluate fraction 5 was around 100 nm as determined by NTA (Figure 1B), in line with the size visualized by TEM (Figure 1C). For all further studies, SEC eluate fraction 5 was used. The main density of the isolated pEVs was 1.17 g/ml (sucrose density) (Figure 1D), which confirmed the presence of exosomes.

Figure 2 | Characterization of pEVs from healthy controls and RA patients. Mode particle size (A), protein content per particle (B) and amount of particles (C) from pEVs isolated from 500 µl pfp of HC (n = 24) or RA patients (n = 41), measured by NTA and micro-BCA protein assay. Next, RA patients were divided into two groups based on the presence of IgM-RF (RF+RA, n = 28; RF−RA, n = 13): particle size (D), protein per particle (E) and amount of particles (F) of isolated pEVs.

Detecting RF on pEVs in RA Patients

To investigate whether IgM-RF, as detected in the plasma of RF+RA patients by ELISA, could also be detected in fraction 5 of the SEC-isolated pEVs, an IgM-RF ELISA was performed. IgM-RF was detectable in 46% (13 out of 28) of RF+RA patients (Figure 4A), while no IgM-RF was found in SEC eluate fraction 5 of RF−RA patients (data not shown). IgM-RF levels in pfp were significantly enhanced in RF+RA patients with IgM-RF+ pEVs, although a high variability was observed in this group (Figure 4B). This high variability, together with the observation that the IgM-RF levels determined in SEC fraction 5 did not correlate with the plasma levels of IgM-RF (Figure 4C), excludes the co-isolation of plasma RF protein in SEC fraction 5. No differences in particle size, particle concentration or protein concentration per particle were observed between RF+RA patients with or without IgM-RF on their pEVs (Figures 4D-F, respectively).

Confirming the Presence of IgM-RF on pEVs

RFs are autoantibodies directed against the Fc-tail of immunoglobulin G. To study IgG binding to IgM-RF+ pEVs, we coupled pEVs isolated from 8 RF+RA patients to anti-CD63 beads, incubated them with phycoerythrin (PE)-conjugated IgG and analyzed IgG binding using FC. IgG-PE bound to pEVs in the same RF+RA patients in which IgM-RF was detected on EVs by ELISA (Figure 5A). A representative FC plot of an RA patient with IgM-RF+ pEVs is shown (Figure 5B). To estimate the proportion of IgM-RF+ pEVs in the total isolated pEV population, pEVs were labeled with the fluorescent membrane stain PKH26 and thereafter incubated with Protein L beads. Fluorescence was measured in the Protein L-bound and unbound fractions. On average, 3.6% more fluorescence signal was detected in the Protein L-bound fraction of the IgM-RF+ pEVs compared with the IgM-RF− pEVs (Figure 5C). Detection of IgG binding to the Protein L-unbound pEVs showed a decreased PE signal, indicating the presence of IgM-RF on the pEVs (Figure 5D). These two different approaches confirmed that the IgM-RF+ particles are actually EVs, since they have a lipid bilayer and carry the exosome marker CD63.

RF+RA Patients With IgM-RF-Containing pEVs Showed More Severe Disease

An overview of the clinical data obtained from RF+RA patients, stratified into two groups based on the presence of IgM-RF on their pEVs, is shown in Table 2. As IgM-RF and EVs are thought to be implicated in the disease process, we divided the RF+RA patients into two subgroups based on the presence or absence of IgM-RF on their pEVs (IgM-RF+ pEVs and IgM-RF− pEVs). After relating the obtained clinical parameters to these two subgroups, we observed no differences in TJC and SJC (Figures 6A,B). However, DAS28, VAS disease activity, CRP and ESR levels were significantly enhanced in patients with IgM-RF+ pEVs (Figures 6C-F, respectively), suggesting the presence of IgM-RF on pEVs as a novel biomarker for disease activity in RA patients.

DISCUSSION

In this study, plasma EVs isolated from healthy controls and RA patients did not show different characteristics in any of the parameters studied; however, among seropositive RA patients we identified a subgroup expressing IgM-RF on their pEVs, and these patients showed significantly higher disease activity. We set out to study the putative differences between EVs from RA patients and healthy controls, as well as between serologically different RA patient groups. Others found that the number of circulating EVs is enhanced in RA patients (24), but we could not confirm this result in our study: the total particle concentration of pEVs obtained was not different between RA patients and HC.
We also showed that the protein content and particle size of pEVs isolated from RA patients were comparable to HC, although others reported that the protein content of EVs from patients with RA is altered (25). The coagulation cascade in serum can lead to massive EV release by thrombocytes, as can storage at −80 °C (26), and the observed differences could be explained by the fact that platelet counts are increased in RA patients (27).

Figure 4 | IgM-RF detection in fraction 5 of SEC-separated platelet-free plasma. IgM-RF levels on pEVs isolated from all 28 RF+RA patients were measured by ELISA, and the percentages of RF+RA patients with or without IgM-RF+ pEVs are shown (A). For these two subdivided groups, IgM-RF blood levels are shown per individual patient (B). The correlation of IgM-RF levels on pEVs with levels in platelet-free plasma (pfp) of RF+RA patients with IgM-RF+ pEVs is shown (C); the Pearson correlation coefficient (r) and its p-value are given. Mode particle size (D), protein content per particle (E) and amount of particles (F) from pEVs of these subdivided groups (RF+RA patients with IgM-RF+ or IgM-RF− pEVs), measured by NTA and micro-BCA protein assay. Statistically significant differences were determined by Mann-Whitney test, **p < 0.01.

Our study shows that IgM-RF is present on the pEVs isolated after SEC separation in a subset of RF+RA patients, and not in RF−RA patients nor healthy controls. The fact that IgM-RF levels in the SEC-isolated pEVs did not correlate with the levels detected in plasma makes it plausible that IgM-RF is bound to the pEVs. Furthermore, the presence of a lipid bilayer (PKH26 staining) and the binding to anti-CD63 confirmed that these IgM-RF+ particles are true extracellular vesicles.

ESR was enhanced in RF+RA patients compared to RF−RA patients, while other clinical parameters were not statistically different. In this study we found that RF+RA patients with detectable IgM-RF on their pEVs showed significantly higher ESR compared to RF+RA patients without IgM-RF on their pEVs, and CRP levels were also statistically enhanced. CRP is a sensitive index of RA disease activity, and changes in CRP can predict the treatment response of patients (28). Interestingly, the objective joint scores (TJC and SJC), which are considered the most robust reflectors of disease activity, did not differ between these subgroups, while the VAS (global assessment of disease activity) was clearly increased.

Figure 5 | Antibody (IgG) binding to IgM-RF-containing pEVs. SEC fraction 5 samples from 8 RA patients were incubated with anti-CD63 beads to specifically immobilize the EVs on the beads. Thereafter, the beads were incubated with human IgG-PE and analyzed by FC. IgG binding to pEVs coupled to anti-CD63 magnetic beads was only found in the RF+RA patients with IgM-RF+ pEVs (A). A representative FC plot is shown for an RF+RA patient with IgM-RF+ pEVs (black) and an RF+RA patient with IgM-RF− pEVs (gray) (B). Next, pEVs were stained with PKH26 and incubated with Protein L beads; the levels of bound and unbound PKH26-stained pEVs were measured by fluorometer (C). Reduced IgG-PE staining was observed on Protein L-unbound IgM-RF+ pEVs bound to CD63 magnetic beads (black), as measured by FC (D). Statistically significant differences were determined by Mann-Whitney test, *p < 0.05.

Figure 6 | TJC and SJC (A,B), DAS28 and VAS (C,D), and CRP and ESR levels (E,F) of the two subgroups. Statistically significant differences were determined by Mann-Whitney test, *p < 0.05, **p < 0.01 and ***p < 0.001.
The VAS is a more subjective index of the patient's general health, reflecting pain and probably other symptoms and manifestations of RA. The disparity between the global assessment of disease activity and the number of tender and swollen joints is striking, but appears to be in line with clinical literature in which (changes in) global VAS are poorly explained (29-32). It is plausible that IgM-RF is bound to IgG present on pEVs, or that it recognizes IgG autoantibodies that bind to antigens on EVs. Another possibility is that the IgM-RF+ pEVs originate from autoreactive B-cells; as B-cells play an important role in RA (18), there could be changes in the BCR repertoire, or enhanced B-cell activation, in this subgroup of RA patients with more active disease. In that case, the IgM-RF+ pEVs could reflect changes in pre-B-cell immunity and disease activity more rapidly than changes in circulating levels of 'free' IgM-RF, which reflect plasma activity. The reported short circulation half-life of EVs, on the order of hours, could make the IgM-RF+ pEVs more responsive to changes in disease activity than free IgM-RF, which has a half-life of 5 days in the circulation (33).

For future study, it would be interesting to measure the effect of B-cell depletion therapy with Rituximab on IgM-RF+ pEVs and to investigate whether changes in IgM-RF+ pEV levels reflect the drop in B-cells and the clinical response to this treatment. In that case, measuring IgM-RF+ pEV levels might be a way to determine the efficacy of B-cell depletion therapy. Furthermore, longitudinal studies may reveal whether we are dealing with a unique subpopulation of RA patients or whether the IgM-RF+ pEVs merely reflect the disease state at the moment of blood sampling. The IgM-RF+ pEV levels could reflect a combination of inflammation and autoreactive B-cell activation. It is plausible that Fc-receptors are activated by immune complexes formed by IgM-RF+ pEVs; therefore, further research is needed to investigate the added value of the presence of EVs in immune-complex-induced Fc-receptor signaling. Studying their functional effect on immune cells requires the use of pure IgM-RF+ pEVs, and a recently described technique based on advanced imaging flow cytometry would be suitable for separating IgM-RF+ pEVs from the total pEV population isolated from RA patients (34).

In conclusion, the present study shows for the first time that in a subset of seropositive RA patients IgM-RF is present on pEVs, and that this is related to higher disease activity. Our discovery sheds new light on the disparity between the global assessment of disease activity and the tender and swollen joint counts, seemingly uncovering a potential biological factor in the 'subjective' measure of global disease activity.

AUTHOR CONTRIBUTIONS

OA, BP, RT and FvdL participated in the design of the study. OA and BP contributed the experimental methods. OA and BP performed the data analysis. OA, BP and FvdL wrote the manuscript. OA, BP, RT, MW, PvL, PvdK, MK, FvdH and FvdL contributed to discussions and approved the manuscript.

FUNDING

The work was financially supported by the Dutch Arthritis Foundation (grant number 15-2-403) and the ZonMW program More Research with Less Animals (114021001).
5,560.8
2018-10-29T00:00:00.000
[ "Medicine", "Biology" ]
Iron Snow in the Martian Core? The decline of Mars' global magnetic field some 3.8-4.1 billion years ago is thought to reflect the demise of the dynamo that operated in its liquid core. The dynamo was probably powered by planetary cooling and so its termination is intimately tied to the thermochemical evolution and present-day physical state of the Martian core. Bottom-up growth of a solid inner core, the crystallization regime for Earth's core, has been found to produce a long-lived dynamo, leading to the suggestion that the Martian core remains entirely liquid to this day. Motivated by the experimentally-determined increase in the Fe-S liquidus temperature with decreasing pressure at Martian core conditions, we investigate whether Mars' core could crystallize from the top down. We focus on the "iron snow" regime, where newly-formed solid consists of pure Fe and is therefore heavier than the liquid. We derive global energy and entropy equations that describe the long-timescale thermal and magnetic history of the core from a general theory for two-phase, two-component liquid mixtures, assuming that the snow zone is in phase equilibrium and that all solid falls out of the layer and remelts at each timestep. Formation of snow zones occurs for a wide range of interior and thermal properties and depends critically on the initial sulfur concentration, x0. Release of gravitational energy and latent heat during growth of the snow zone does not generate sufficient entropy to restart the dynamo unless the snow zone occupies at least 400 km of the core. Snow zones can be 1.5-2 Gyrs old, though thermal stratification of the uppermost core, not included in our model, likely delays onset. Models that match the available magnetic and geodetic constraints have x0~10% and snow zones that occupy approximately the top 100 km of the present-day Martian core. Introduction Low-altitude vector magnetometer measurements from Mars Global Surveyor show that Mars presently lacks a global dipole field, but reveal large regions of strongly magnetized crust located mainly in the southern highlands (Acuña et al. 1998). The prevailing view is that this magnetization was acquired as the rock cooled in the presence of a global magnetic field (Stevenson, 2001; Breuer and Moore, 2015). The global field was likely produced in the liquid core by a dynamo process in which thermal (and possibly chemical) buoyancy forces drive convective motion (Stevenson, 2001). Inferences based on the age of impact craters (Acuña et al., 1998; Langlais et al., 2012) and Martian meteorites (Weiss et al., 2002) suggest that the global field decayed around 3.8-4.1 Ga. This event marks the demise of the Martian dynamo and may have been contemporaneous with changes in the planet's heat loss (Ruiz, 2014) and oxidation state (Tuff et al., 2013). Explanations of Mars' magnetic history are intimately linked to the thermal evolution and crystallization regime of its metallic core. A thermal dynamo can operate in an entirely liquid core, provided that the ancient core-mantle boundary (CMB) heat flow, Qc, exceeded the heat flow conducted down the adiabatic temperature gradient, Qa (assuming no radiogenic heating). In this scenario the core cooled, perhaps from an initially superheated state compared to the mantle (Williams and Nimmo 2004) or modulated by an early episode of plate tectonics (Nimmo and Stevenson, 2000), until Qc fell below Qa. Impact-induced thermal insulation of the core (Monteux et al., 2013; Arkani-Hamed and Olson, 2010) would produce a similar outcome.
On the other hand, a thermochemical dynamo can operate with Qc < Qa. It has been suggested that rapid growth of an inner core early in Mars' history led to dynamo termination when the size of the liquid region fell below a critical threshold (Stevenson, 2001). This scenario has not been favored because inner core growth provides additional power sources that lead to a long-lived dynamo (Williams and Nimmo, 2004). In this paper we investigate a third scenario: the top-down crystallization of the Martian core. A necessary condition for top-down core freezing is dTm/dP < dTa/dP for all pressures P, where Tm is the liquidus temperature of the core alloy and Ta is the ambient temperature. dTa/dP is positive, and for an adiabatic core dTa/dP ∝ Ta ∝ Tc, where Tc is the CMB temperature. Melting curves for iron-sulfur systems, the mixture used throughout this work, have been extensively studied (Kamada et al., 2012; Morard et al., 2011; Figure 2d). Of particular interest are the results of Stewart et al. (2007), who found that dTm/dP < 0 at the P-T conditions of Mars' core using Fe-S mixtures with 10.6% and 14.2% S, which favors top-down over bottom-up crystallization. However, application of top-down crystallization to Mars depends critically on whether its core has cooled sufficiently over the last 4.5 billion years for Ta to fall below Tm. A further issue is that additional power sources accompanying top-down crystallization could have provided sufficient entropy to restart the dynamo. Here we build a parameterized model of top-down crystallization in the Martian core. We consider the so-called "iron snow" regime that arises when the bulk sulfur concentration is smaller than the sulfur concentration of the eutectic composition: solid produced on freezing is heavier than the residual liquid and iron "snows" down onto the underlying liquid (Hauck et al., 2006; Dumberry and Rivoldini, 2015; Rückriemen et al., 2015). We follow the premise of previous work (Hauck et al., 2006; Rückriemen et al., 2015) and assume that crystallization in the snow zone produces a slurry: solid particles are suspended in a liquid Fe-S mixture and the solid fraction remains low enough that the system behaves as a liquid. The fluid dynamical behavior of a binary slurry is fundamentally different from that of a binary liquid mixture and so the theory must be developed from scratch, starting with the fundamental conservation equations. We derive energy and entropy equations from an established slurry theory (Loper and Roberts 1977) that does not appear to have been utilized in previous models of iron snow in planetary cores. Our model assumes that the snow layer is always in phase equilibrium and that freezing produces iron solid that quickly falls to the deeper liquid core (see Sections 2 and 4 for detailed discussion of the modelling approximations). Starting from an equilibrium state with the entire region at the liquidus temperature, cooling reduces Ta below Tm, leading to the formation of solid iron (Figure 1). The local increase in solid fraction releases latent heat and elevates the sulfur concentration in the coexisting liquid phase, which in turn depresses Tm until it reaches Ta (Tm < Ta implies the layer is fully liquid). Assuming no net mass exchange between core and mantle on the timescales of interest, the light residual liquid rises, producing a stable chemical stratification across the snow zone (Dumberry and Rivoldini, 2015; Rückriemen et al., 2015).
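The slope criterion above is easy to check numerically. The sketch below is a minimal illustration, not part of the original model: the adiabatic slope follows the Ta = T0(1 + 0.02P) parameterization introduced later in the Model Parameters section, and the value of the negative liquidus slope is an assumption loosely motivated by the Stewart et al. (2007) measurements.

```python
# Minimal sketch of the top-down freezing criterion dTm/dP < dTa/dP.
# All numbers are illustrative assumptions, not values from this paper.

T0 = 2000.0          # K, illustrative adiabat anchor (assumption)
dTa_dP = 0.02 * T0   # K/GPa, slope implied by Ta = T0*(1 + 0.02*P)
dTm_dP = -5.0        # K/GPa, illustrative negative liquidus slope (assumption)

if dTm_dP < dTa_dP:
    print("Liquidus slope < adiabat slope: freezing begins at the top (CMB).")
else:
    print("Bottom-up crystallization (Earth-like inner core growth).")
```

Only the sign comparison matters here: any Fe-rich composition whose liquidus slope lies below the adiabatic slope at core pressures would satisfy the top-down criterion in the same way.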
The heavy solid sinks out of the snow layer into the underlying liquid region where it remelts, absorbing latent heat and causing a decrease in sulfur concentration and an increase in Tm. Gravitational energy is liberated in the snow zone due to iron sinking and also in the liquid due to stirring induced by dense iron remelting (Rückriemen et al., 2015; Figure 1). These additional heat sources, together with variations in composition and temperature across the snow zone induced by freezing, influence the core cooling rate and the power available to generate a magnetic field. The complexity of the iron snow equations together with uncertainties in thermal and material properties of Fe-S alloys at high P-T conditions mean that we do not expect (or attempt) to obtain a definitive thermal history for Mars. Rather, we seek to understand the conditions that could have led to snow zone formation. Nevertheless, viable models must be compatible with the magnetic history of Mars and with geodetic observations, which suggest that at least part of the Martian core is liquid at the present day (Yoder et al., 2003). Model and Methods We generalize an existing 1D thermochemical evolution model (Davies, 2015) to study crystallization of an iron-sulfur alloy (Taylor, 2013) in the Martian core. A standard averaging procedure (Nimmo, 2015) is used to obtain the equations governing changes in the reference or equilibrium state, in which variables depend only on radius r. In regions where there is no solid and outside thin boundary layers it is assumed that vigorous convection maintains a reference state where the pressure is determined by a hydrostatic balance, the sulfur concentration is uniform and the radial entropy gradient is zero (Braginsky and Roberts 1995). These conditions imply that temperature follows an adiabatic profile. The global energy budget determines the core evolution by balancing Qc against the sum of the heat sources within the core as defined below. The energy balance does not contain information about the dynamo because magnetic energy is converted to heat within the core. The entropy balance contains the irreversible processes of thermal, chemical, mechanical and Ohmic dissipation. Together, these equations describe the thermal and magnetic history of the core. The general slurry theory describes the time-dependence of particle composition and local departures from phase equilibrium (Loper and Roberts, 1977) and must be simplified for application to planetary cores. We therefore adopt the following two approximations espoused by Loper and Roberts (1977): 1) No light element partitions into the solid phase on freezing; 2) "fast melting", i.e. instantaneous relaxation to phase equilibrium. The first approximation is supported by experiments that reported very low sulfur concentrations in the solid phase (Kamada et al., 2012; Li et al., 2001) and has also been used to model iron snow in Ganymede's core (Rückriemen et al., 2015). The second approximation means that material in an infinitesimal volume of the continuum will melt/freeze instantly. In an equilibrium snow zone, the entire system is on the liquidus and the liquidus is collinear with the adiabat. Heavy sulfur-depleted solid sinks and eventually falls out of the layer where it remelts because the temperature of the underlying liquid is above the liquidus. We assume, as in Rückriemen et al. (2015), that the timescale for sinking and remelting is much faster than the timescale for changes to the equilibrium state.
At each timestep, all of the newly created solid sinks out of the layer and remelts, leaving the layer on the liquidus. We refer to this third approximation as "fast remelting". The temperature profile may not be adiabatic throughout the Martian core because compositional and/or thermal stratification can develop below the CMB. Consider first the case where Qc > Qa, i.e. the temperature profile is everywhere unstably stratified. Subsequent growth of a snow zone will produce a stable compositional stratification below the CMB. In this case it can be shown using equation (8) below that the isentropic condition requires dT/dr = -(αgT)/cp - (h̄/cp)(dξ/dr), where cp is the specific heat capacity, h̄ > 0 is the heat of reaction coefficient defined below, and the fast remelting approximation and φ ≪ 1 have been used to remove the contribution from radial variation in solid fraction. The first term on the right-hand side is the usual definition of the adiabatic temperature gradient in a homogeneous fluid. The second term increases |dT/dr| since dξ/dr > 0 and shows that there must be a greater variation in temperature in the presence of a stabilizing compositional gradient in order to keep the layer isentropic. Accounting for this second term is complicated because dξ/dr is determined from the liquidus, which is itself related to dT/dr (see below). Instead of undertaking a complex iterative procedure, which seems unnecessary in light of significant uncertainties in several of the model parameters, we ignore variations in ξ in the adiabat. A posteriori estimates (Section 4) reveal that this is a good approximation. The configuration of unstable thermal stratification and stable compositional stratification is potentially susceptible to oscillatory double-diffusive instabilities since heat and mass have different diffusion coefficients (Turner, 1973). In the standard doubly-diffusive configuration the horizontally averaged temperature is not expected to deviate significantly from the original adiabatic profile in this case (Buffett and Seagle, 2010) and any effect on terms in the global energy and entropy budgets should be minor. These results may not apply to an equilibrium slurry where Fickian diffusion no longer holds (Loper and Roberts, 1977); however, in the absence of theoretical or experimental evidence to the contrary we assume that doubly diffusive effects do not influence the adiabatic profile. If Qc < Qa a region at the top of the core will become stable to thermal convection. The base of the thermally stable layer is located where the stabilizing thermal buoyancy balances the destabilizing buoyancy forces that drive convection in the deeper core (Lister and Buffett, 1998). Departures from an adiabat are expected to be small because the thermal diffusion time, τ = d²/κ, where d is the thickness of the thermally stable layer and κ ≈ 10-6 m² s-1 is the thermal diffusivity, is around 10^7 yrs even for layers as thin as 10 km, and should not affect estimates of terms in the global equations significantly. Thermal history calculations for the Earth's core with and without a thermally stratified layer showed only minor differences to the global energy balance (Labrosse et al., 1997), which were caused primarily by the assumption that gravitational energy release occurs only in the unstably stratified region rather than by departures from adiabaticity. A more important effect arises because the inability of mantle convection to evacuate all of the heat brought to the CMB by core convection requires that the top of the core must heat up.
Since snow zones form when Ta < Tm at the CMB, formation will be delayed when thermal stratification is present compared to when it is absent. Unfortunately, a thermally stable layer will, in general, not grow at the same rate as a snow layer; creating a parameterization for the dynamics and couplings between regions of different thermal and compositional stability significantly complicates the model and obscures the effects associated with the slurry that we wish to investigate. Our model considers an entirely adiabatic core and will therefore predict an earlier snow zone nucleation, at a lower CMB temperature, than would be obtained if thermal stratification were taken into account; we return to consider the impact of thermal stratification when applying the results to Mars. The main assumptions used to develop a quantitative model for the equilibrium evolution of the snow zone are: 1) All sulfur remains in the liquid phase on freezing. 2) Fast melting, i.e. the snow zone is always in phase equilibrium. 3) Fast remelting of sinking solid, i.e. rapid sinking and remelting of solid iron. 4) An adiabatic temperature profile exists throughout the core. Using approximations (1) and (2) the general thermal energy equation in a slurry can be written (Loper and Roberts, 1977) ρT Ds/Dt = -∇·q + μ ∇·i + Φ, (1) where the density ρ, temperature T, entropy s, and chemical potential μ of the liquid are all functions of radius, and D/Dt denotes the material derivative. The heat flux vector q and mass flux vector i are determined by constitutive relations. The total dissipation Φ is assumed to arise solely from Ohmic heating, Φ = J²/σ, where J is the electric current density and σ is the electrical conductivity, since the viscous dissipation is expected to be small in planetary cores (Nimmo, 2015). Radiogenic heating contributes little entropy (Williams and Nimmo, 2004) and is not considered here. The global energy equation for a slurry is obtained by summing the internal, mechanical and magnetic energies and integrating over the volume of the slurry. These equations are supplemented by the equations describing conservation of total mass (equation (2)) and of the light element ξ, the latter of which can be written (Loper and Roberts, 1977) ρ Dξ/Dt = -∇·i. (3) Changes in the total internal energy can be expressed using equation (4). With the approximations above the total mechanical energy budget for a slurry is given by equation (5), where ψ is the gravitational potential, which is calculated locally as described in Davies (2015). The total magnetic energy budget, equation (6), is the same as for a two-component liquid, where J × B is the Lorentz force. The rates of change of kinetic and magnetic energy are small and can be neglected (Nimmo, 2015) in equations (5) and (6). Adding the integral of equation (1) to equations (4)-(6) gives the global balance (7). Equation (3) has been used to obtain the second term on the left-hand side, and dS is the (outward-pointing) area element on the surfaces that bound the volume. The first term in (7) can be rewritten using the entropy differential [equation 5.9 of Loper and Roberts (1977)], T ds = cp dT - (αT/ρ) dP + h̄ dξ - L dφ, (8) where L is the latent heat, α = -(1/ρ)(∂ρ/∂T) is the thermal expansion coefficient and h̄ is the heat of reaction coefficient. Equation (8) is identical to the entropy differential for a binary liquid mixture except for the last term, which represents changes in entropy produced by latent heat release (absorption) when solid forms (melts). The total energy budget for the whole core, equation (12), is obtained by applying equation (7) to the liquid and slurry regions and applying boundary conditions at the interface and CMB. We denote with superscripts s and l quantities on the snow and liquid sides of the interface, respectively.
The constitutive relation for q in a binary slurry is (Loper and Roberts, 1977; Loper and Roberts, 1980) q = -k∇T + L j, where j is the flux of solid particles and k is the thermal conductivity. Note that j = φ = 0 outside the slurry. At the CMB, we assume for simplicity that there is no net mass exchange, i.e. i · n̂ = 0, where the unit vector n̂ points radially outward. To determine the boundary condition at the interface radius rs we follow standard pill-box arguments (Loper and Roberts, 1987), obtaining a jump condition in which us is the velocity of the interface and 〈·〉 denotes the jump in the enclosed quantity across rs. The terms on the right-hand side represent respectively the latent heat and heat of reaction in the shell of freezing material. Since the core temperature is assumed adiabatic the cooling rate at radius r can be related to the CMB cooling rate (Gubbins et al., 2003): dT(r)/dt = (Ta(r)/Tc) dTc/dt. (13) Assuming that the interface moves in the radial direction, us · n̂ = -drs/dt. The latent heat of melting is defined as L = Tm Δs, where Δs is the entropy change on melting, and hence drs/dt can be related to the core cooling rate in a manner analogous to the situation of inner core growth. The gravitational energy released due to rearrangement of light material can be re-expressed using the identity i · ∇ψ = ∇·(ψ i) - ψ ∇·i and taking the part of the density change due to composition. We separate the contributions to the gravitational energy from the freezing out of solid in the snow zone, denoted Qgs, and the remelting of solid in the liquid region, Qgl, where αc = -(1/ρ)(∂ρ/∂ξ) is the compositional expansion coefficient for sulfur, assumed constant, and the second term on the right-hand side of (17) gives the contribution due to motion of the interface. The sulfur concentration in the liquid region below the snow layer is obtained by applying equation (3) to the snow zone and liquid layer and adding the results to give equation (18). Applying a standard pill-box analysis (Loper and Roberts, 1987), the boundary condition at rs can be written using the fact that the total mass of sulfur is conserved. The time-averaging process removes the u · ∇ξ part of the first two terms in (18). Furthermore, assuming that the liquid region is well-mixed allows dξl/dt to be taken outside the integral. The second term then becomes Ml dξl/dt, where Ml is the mass of the liquid region. Equation (18) can then be written as equation (20). The second term on the right-hand side of (20) is very small because ξ is continuous across the interface and so ξs ≈ ξl (ξs and ξl are to be evaluated on either side of the interface). dξs/dt is positive because Tc, and hence Ta, decrease with time: more light element is needed to keep the layer at the liquidus. Therefore, as expected, ξl decreases with time as the liquid region becomes more enriched in iron. We obtain dξ/dt from the liquidus relation (21), where ΔV is the change in volume on freezing and Δs̄ = L/Tm. Δs̄ is calculated from ideal solution theory as Δs̄ = kB × NA × 1000/AS (J kg-1 K-1), where kB is Boltzmann's constant, NA is Avogadro's number and AS is the atomic weight of S (Gubbins et al. 2004). Solid is formed rapidly and subsequent changes in ξ occur due to rearrangement of the solid fraction. We therefore assume that changes in ξ occur on a timescale that is long compared to changes in φ. Then ξ = ξl(1 - φ) and (neglecting pressure changes) we obtain equation (22). This equation resembles the corresponding relation used by Rückriemen et al. (2015), who estimated the liquidus gradient from an empirical melting curve. Relations (20) and (22) determine the evolution of ξl and ξ. On the short timescale we neglect variations in ξ; the solid fraction then follows from equation (23). We must distinguish between the latent heat released on freezing of solid particles, QLs, and the latent heat absorbed on remelting, QLl.
The total mass of solid created per unit time, ∫ ρ (dφ/dt) dV, is equal to the mass destroyed; the only difference is that freezing occurs throughout the snow zone whereas remelting occurs at rs. We therefore have QLs = ∫ ρ L (dφ/dt) dV and QLl = -∫ ρ L(rs) (dφ/dt) dV, with the integrals taken over the snow zone. Substituting equations (13)-(17) and (20)-(23) into (12) allows the global energy equation to be written symbolically as Qc = Q̃ (dTc/dt). (24) The additional energy sources that arise due to iron snow are the latent heat released due to formation of solid, QLs, the latent heat absorbed as falling snow remelts, QLl, the gravitational energy released due to mixing of the remelted iron in the liquid region, Qgl, and the gravitational energy released due to the sinking of iron particles in the snow zone, Qgs. All terms are proportional to the CMB cooling rate as determined by equations (13), (15), (22) and (23). The entropy equation (25) is obtained from equation (1) in the usual way. Viscous and chemical dissipations are thought to be much smaller than the Ohmic dissipation and so are neglected (Nimmo 2015). Equations (24) and (25) are evolved forward in time using a timestep of 1 Myr. The initial time is 4.5 Ga and the final time is the present-day, unless a snow zone occupies the whole core, in which case the calculation is terminated at that point. The thermal and chemical evolution of the coupled snow-liquid system is calculated such that the base of the snow zone is at the liquidus temperature at each time step. This evolution repeats at each model iteration as the core cools. Model Parameters Mantle convection sets Qc while core convection sets the CMB temperature, and so the evolution of the two systems should strictly be solved simultaneously. However, significant uncertainties in the parameterization of mantle convection, particularly the appropriate rheological law and the scaling of surface and CMB heat flow with temperature, mean that we do not expect to obtain a definitive thermal history for Mars but seek to understand whether the snow regime is potentially compatible with existing geodetic and magnetic observations. Focusing on the core alone allows us to elucidate the individual effects of the various physical processes that arise from snow zone growth. We consider two time-series of Qc from previous studies that both match the inferred dynamo cessation time, but nevertheless exhibit significant differences that embody some of the uncertainties in the mantle problem. The time-series of Williams and Nimmo (2004) is used directly, while we represent the second (L14) series by three piecewise linear segments that capture the initial rapid decline, the near-constant variation in recent times, and the intermediate transition period (Figure 2b). We vary core properties individually to elucidate their influence on the snow regime. This has the potential to produce an inconsistency since both W04 and L14 used a particular core model, which produced a particular time-series of Tc that is compatible with their time-series of Qc. To mitigate against this effect we set the initial CMB temperature, Tc0, to be close to the original values. For W04, Tc0 = 2400 K, while L14 did not quote a value and so we vary Tc0 around the W04 value. We find that values of Tc after 4.5 billion years of evolution differ by < 50 K from the original values in the majority of models. The paucity of independent observational constraints leads to some interdependencies between estimates of interior structure properties (e.g. assuming a temperature profile in order to estimate the density profile), but in this initial exploration we vary each parameter independently.
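To make the time-marching concrete, the following sketch integrates the symbolic balance of equation (24) at the 1 Myr timestep quoted above. The Qc(t) curve and the effective heat capacity Q_tilde are illustrative placeholders rather than the paper's inputs; only the initial temperature Tc0 = 2400 K is the W04 value.

```python
# Minimal sketch of marching the symbolic energy balance of equation (24),
# Qc = Q_tilde * dTc/dt, forward at a 1 Myr timestep. The Qc(t) function
# and Q_tilde are assumptions for illustration only.

SEC_PER_MYR = 3.156e13

def Qc(t_myr):
    """Hypothetical CMB heat flow (W): rapid early decline, then flat."""
    return 2.0e12 * max(0.125, 1.0 - t_myr / 1000.0)

Q_tilde = 1.6e26   # J/K, effective core heat capacity ~ M_core * cp (assumption)
Tc = 2400.0        # K, initial CMB temperature (W04)

for step in range(4500):               # integrate over 4.5 Gyr
    dTc_dt = -Qc(step) / Q_tilde       # K/s, cooling from equation (24)
    Tc += dTc_dt * SEC_PER_MYR         # advance one 1 Myr timestep

print(f"Present-day Tc ~ {Tc:.0f} K")  # ~2000 K with these placeholder inputs
```

In the actual model a snow zone would begin to form during this march as soon as Ta dropped below Tm at the CMB, after which the latent heat and gravitational terms discussed above would modify Q_tilde.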
Values of the density ρ, CMB radius rc and CMB pressure (Table A1) are selected from W04 and also from a recent detailed analysis of the Martian interior (Rivoldini et al. 2011) that produced a range of models constrained by moment-of-inertia and k2 Love number data. Rivoldini et al. (2011) find that ρ varies by only 5-10% across the Martian core and so there is little error in taking ρ constant. These values determine the structure of the Martian core, i.e. the radial profiles of gravity, gravitational potential and hydrostatic pressure (Figure 2). For each model the pressure scale constructed in this manner is used to establish the temperature profiles discussed below. The Martian core is thought to be composed primarily of an iron-sulfur alloy (Dreibus and Wanke 1985) and this simple chemistry has been used in almost all thermal history models to date (Breuer and Moore 2015). A more complicated core chemistry might be expected since the high temperatures achieved during the early stage of core formation may have facilitated the incorporation of Si, O, Ni, P, and H into liquid iron (Tsuno et al. 2007). The Martian core is expected to contain a negligible amount of Si (Sanloup and Fei, 2004) and O (Tsuno et al. 2007; Rubie et al. 2011), while Ni has only a minor effect on phase equilibria of Fe-S (Stewart et al. 2007). The amount of phosphorus in the Martian core is thought to be ten times that of Earth's core (0.16 vs. 0.02 wt% P2O5) (Dreibus and Wanke, 1985; Hart and Zindler, 1986). Both its abundance and the magnetic transitions of P-bearing phases may influence the density distribution in the core (Gu et al., 2014), but we do not consider this additional complexity here. The density and melting point may also be lowered by the presence of hydrogen, though its content in the core is poorly constrained. In the absence of sufficient constraints regarding core equilibrium chemistry or a suitable theory for the melting point depression in ternary (or higher order) mixtures we model the evolution of a Fe-S mixture. The initial sulfur concentration is varied between 10 and 15%, which is within the estimates of previous models of the composition of Mars (e.g. Dreibus and Wanke, 1985; Taylor, 2013). The adiabatic temperature is parameterized by the equation Ta = T0(1 + 0.02P), with P the pressure in GPa, which fits the published profiles of Williams and Nimmo (2004) and Fei and Bertka (2005). The coefficient T0 varies as the core cools. Note that the adiabatic gradient dTa/dP is proportional to T0 and therefore decreases as the core cools (Figure 2c). Results Input parameters for all models are listed in Table A1 and diagnostics are presented in Table A2. We focus first on a model that uses the default parameters of W04 except for the melting curve and initial S concentration, which are set using the S07-10.6 profile. In this model the dynamo cessation time td = 459 Myrs (~4 Ga), defined by the Ohmic dissipation entropy EJ falling below zero, while the present-day CMB temperature Tc = 1822 K (Table A2, highlighted in bold); both values are very close to the original solution obtained by W04. Figure 3 shows profiles of temperature, solid fraction and S concentration for this model. Approximately 3.2 billion years into the evolution, Ta falls below Tm at the CMB and an iron snow layer begins to form, growing to 146 km by the present day.
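The onset geometry just described, with Ta falling below Tm first at the CMB, can be visualized by intersecting the parameterized adiabat with a melting curve. In the sketch below the linearized liquidus and all anchor values are assumptions chosen only to produce a thin snow zone; they are not fits to the S07-10.6 curve or to the model's pressure profile.

```python
# Sketch: locating the snow zone as the pressure range where the adiabat
# Ta = T0*(1 + 0.02*P) (P in GPa) sits below a liquidus Tm(P). The
# liquidus here is an illustrative linearization (assumption).

import numpy as np

def Ta(P, T0):
    return T0 * (1.0 + 0.02 * P)

def Tm(P, Tm_cmb=1850.0, slope=-4.0, P_cmb=20.0):
    """Illustrative liquidus anchored at the CMB with dTm/dP < 0."""
    return Tm_cmb + slope * (P - P_cmb)

P = np.linspace(20.0, 40.0, 400)   # GPa, CMB to centre (representative range)
T0 = 1290.0                        # K, chosen so Ta(CMB) ~ 1806 K < Tm(CMB)

snow = P[Ta(P, T0) < Tm(P)]        # pressures where the core sits below Tm
if snow.size:
    print(f"Snow zone spans {snow.min():.1f}-{snow.max():.1f} GPa below the CMB")
else:
    print("Core entirely above the liquidus: no snow zone")
```

Because the adiabat steepens with T0 while the liquidus slope is negative, the crossing point moves deeper as the core cools, which is why the layer in the model grows with time.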
The solid fraction remains below φ ≈ 0.2%, consistent with the modeling assumptions and with a recent model of iron snow in Ganymede's core (Rückriemen et al., 2015), though the profiles of φ do not exhibit the curvature obtained by Rückriemen et al. (2015) near the base of the layer, which appears to stem from the different methods used to estimate the liquidus gradient. The sulfur concentration increases across the snow zone by almost a factor of 1.5 at the present day, which arises partly due to the decreased S concentration in the deep core as Fe remelts and partly because of the enrichment in S with radius required to keep the snow zone on the liquidus. Figure 4 shows the contributions of individual terms to the energy and entropy balances for the model in Figure 3. The latent heat terms QLs and QLl make an order of magnitude larger contribution to the energy budget than the gravitational energy terms Qgs and Qgl, in agreement with the study of Rückriemen et al. (2015), while Qgs is negligible. QLs and QLl almost balance since the same amount of mass is produced and destroyed; the small difference arises because the latent heat coefficient varies with depth. Therefore, these terms have little impact on the cooling rate at the onset of snow formation. The smallness of Qgs and Qgl reflects the slowness of compositional changes, because the cooling rate is low and Tm is a weak function of ξ at conditions corresponding to the upper region of the Martian core. The high thermodynamic efficiency of Qgs and Qgl means that the corresponding entropies are comparable to those of QLs and QLl. Nevertheless, the overall entropy produced from the formation and remelting of iron snow is small (Figure 4) and the dynamo does not restart as long as the snow zone remains relatively thin. The dynamo only restarts when the entropy produced by gravitational energy release due to the remelting snow, Egl, which is proportional to the snow zone volume and growth rate [equations (17) and (20)], overcomes the conduction entropy Ek. Rückriemen et al. (2015) inferred that dynamo action arose in their models of Ganymede. This finding is not comparable to our results since they used scaling laws to assess the onset and maintenance of dynamo action, rather than the entropy formulation employed here. Table A2 shows that the dynamo restarts when the snow zone thickness rc - rs > 400 km in our suite of models. Figure 5 shows a solution obtained with the same parameters as the model in Figure 3, except with a lower value of the density ρ (highlighted in italics in Table A2). Lowering ρ reduces Q̃ [equation (24)] and therefore leads to faster cooling at early times and an older snow zone. The effect of decreasing ρ from 7000 kg m-3 to 6000 kg m-3 is significant, which might partly reflect the lack of feedback on Qc due to changes in ρ in our model; however, decreasing ρ also decreases the difference in gravity, gravitational potential, pressure, and adiabatic temperature across Mars' thermal history. The constraint that the Martian dynamo cannot restart (negative EJ at the present-day) places a nominal upper bound on the thickness of the present-day snow zone of ∼400 km based on the limited model set available (Table A2). Occasionally, models with thick snow zones can produce thin layers below the CMB where ξ exceeds the eutectic composition of ≈ 16 wt% at ≈ 20 GPa (Stewart et al., 2007). The dynamics of this scenario are not included in our model, but it would produce light solid that floats to the CMB, thus reducing the estimates of gravitational energy compared to our calculations.
However, the fact that such layers are very thin suggests that the associated entropy reduction would not prevent the dynamo from restarting. Figure 6 demonstrates the influence of parameter variations on the snow layer across our suite of models. Here the label for each symbol denotes the single quantity that was changed compared to the default model, which used the parameters highlighted in Table A1. The inferred dynamo cessation time (td) is relatively insensitive to changes in most parameters, but is very sensitive to changes in the thermal conductivity k; in Figure 6 the dynamo fails too late with k = 30 W m-1 K-1 and too early with k > 50 W m-1 K-1. Aside from these cases all models in Figure 6 match the inferred dynamo cessation time, produce present-day CMB temperatures well above the eutectic value (Table A2) of 1300-1500 K at Martian CMB pressures (Rivoldini et al., 2011), and produce thin snow zones consistent with geodetic observations that suggest a predominantly liquid present-day core (Yoder et al., 2003). The iron snow regime is less likely to emerge for larger initial CMB temperatures and for certain Qc time-series, which both cause the core cooling rate to decrease, though we obtained snow zones with all Tc0 and Qc values tested. Our results predict a strong sensitivity to k; however, this may be an artifact of the model assumption that Qc does not change with k. The crucial parameter is the initial S concentration ξ0, which determines the melting temperature and therefore strongly influences the initial difference between adiabatic and liquidus temperatures at the CMB. Finally we consider whether iron snow zones arise using the preferred interior structure model of Rivoldini et al. (2011) and other default parameter values in Table A1. These runs use ξ0 = 0.142 and the S07-14.2 melting curve (Table A2). The high values of ξ0 and rc do not favor iron snow formation, but we do find relatively thin present-day snow zones in models with Tc0 ≈ 2250 K, approximately 150 K below the value used in W04. This value still suggests a core that was initially superheated with respect to the mantle, consistent with the original modeling assumptions, and we have not attempted to 'optimize' our solution through a systematic parameter search as the uncertainties in several key variables do not warrant such a procedure. This model has a relatively late dynamo cessation time of 3.6 Ga, but increasing k to 50 W m-1 K-1, which is well within uncertainty, provides an acceptable value of 4 Ga while leaving the snow zone evolution unchanged. Application of the snow model to the Martian core Relaxing the fast melting approximation (i.e. incorporating departures from phase equilibrium) introduces additional terms and equations into the slurry theory and drastically increases the complexity of the constitutive relations (Loper and Roberts, 1977). These additional effects require macroscopic parameterizations of microscopic processes (Loper, 1992) that are poorly understood, and the resulting terms are hard to estimate for planetary cores. While the overall influence of fast melting is hard to quantify, we might expect that the effects may not be significant as long as the relaxation to phase equilibrium occurs on timescales that are much shorter than the long timescale of interest (Gyrs). Incorporating the effects of a multi-component solid phase also significantly complicates the theory by requiring that the history of individual particles is modeled. At present we believe both approximations are sensible compromises for modeling the long-term behavior of snow layers in planetary interiors.
Rückriemen et al. (2015) used scaling laws with simple assumed particle sizes and geometries to argue that the fast remelting approximation is appropriate for modeling iron snow in Ganymede's core. Solomatov and Stevenson (1993) analyzed the conditions required to perpetually suspend particles in a magma ocean, but our model does not predict quantities such as the convective velocity needed to apply their theory. If some of the solid particles remain suspended in the snow zone on long timescales, the latent heat released on freezing, QLs, will exceed the latent heat absorbed on remelting, QLl. Since QLs is released close to the CMB it has a low thermodynamic efficiency factor, suggesting a reduction in entropy available to power the dynamo compared to the fast remelting case considered in this paper. It therefore appears that relaxing the fast remelting assumption would not significantly change our results, though hydrodynamic simulations of slurry dynamics are needed to test the veracity of this statement. Thermal stratification at the CMB The demise of the Martian dynamo is signified by the entropy of Ohmic dissipation, EJ, dropping below zero. However, EJ ≥ 0 by definition and so negative values indicate an inconsistency in the modeling assumptions. The fact that Qc < Qa for most of the evolution suggests that thermal stratification ensues and the temperature profile deviates from the assumed adiabatic profile near the top of the core in order to balance the entropy budget with EJ = 0 after the dynamo fails. If EJ = 0 prior to snow zone formation, the gravitational entropy terms Egs, Egl > 0 (Figures 4 and 5) that arise during snow zone growth would make EJ > 0 and potentially restart the dynamo. Strictly, EJ must exceed some minimum value, denoted EJmin, for dynamo action to occur. EJmin is hard to estimate because it depends on the magnetic field morphology in the core, including the small-scale fields and the field components that remain inside the core, neither of which can be observed. Using just the observable field at the CMB gives EJmin ~ 10^6 MW/K (Gubbins, 1975; Backus et al., 1996), similar to the values of Egs and Egl in our models (Figures 4 and 5). The real value of EJmin is likely to be higher than this (Nimmo, 2015), suggesting that snow zone growth would not restart the dynamo. Since the dominant contributions to EJ come from small-scale magnetic fields inside the core it may be possible that some field generation accompanies snow zone formation but produces an extremely weak signal at the planet's surface. As discussed in Section 2, the thermally stable layer receives more heat through its base than can be removed at the CMB. Thus, the layer must heat up and the CMB temperature should be higher than predicted by our model, raising the question of whether the snow zone would still form. To address this issue we must first determine the relevant equations governing temperature variations in the conducting region. The temperature equation in a slurry, ignoring pressure, radiogenic, and dissipative effects and assuming constant material properties, is (Loper and Roberts, 1987) ρcp ∂T/∂t = k∇²T + L∇·j + ρL ∂φ/∂t, (26) where j (with j·r̂ < 0) is the downward flux of solid material. The last two terms represent the latent heat associated with the total rate of change of solid mass per unit volume. A detailed analysis is complicated because j depends on the size and distribution of solid particles.
However, we observe that the fast melting approximation requires that solid freezes out quickly (∂φ/∂t > 0) while fast remelting requires that solid falls out of the layer quickly (∇·j < 0) compared to the long timescale over which the temperature is changing. Since all solid leaves the layer after each timestep, on this long timescale the last two terms are expected to cancel out, leaving a standard diffusion equation for the temperature. To estimate the temperature difference between an adiabatic and a thermally stratified region we first consider an infinite half-space with a prescribed time-independent subadiabatic heat flux q at the boundary and zero initial temperature (corresponding to no departure from an initial adiabatic profile). In this case, the solution to (26) without solid (φ = j = 0) gives a boundary temperature T0 that can be written (Carslaw and Jaeger, 1959) T0(t) = (2q/k)√(κt/π), (27) where κ = k/(ρcp) is the thermal diffusivity. In Figure 4, a thermally stable layer starts to grow at time t1 = 400 Myrs into the calculation and a snow zone forms at approximately t2 = 3.2 billion years, giving Δt = t2 - t1 = 2.8 Gyrs. At time t1, Qc = 0.257 TW and Qa = 0.875 TW, corresponding to the strongest subadiabatic conditions (Figure 4). With these values equation (27) shows that thermal conduction increases the CMB temperature by ΔT = T0(t2) - T0(t1) ≈ 100 K over 2.8 Gyrs above the value predicted from cooling on an adiabat. Using values for other models that match the cessation time for the Martian dynamo inferred from magnetic observations (Table A2) gives ΔT = 100-250 K. The analytical expression (27) ignores the effects of spherical geometry, finite stable layer thickness and temporal changes in CMB heat flow. We account for these effects by numerically solving the 1D conduction equation ∂T/∂t = (κ/r²) ∂/∂r (r² ∂T/∂r) in a spherical shell of thickness d. Using the time-series of Qc in Figure 4 and an initial adiabat from this run at time t1 we obtain ΔT = 70-140 K for 100 ≤ d ≤ 700 km. The cooling rate in our models is 70-150 K Gyr-1 at the time of snow zone formation and the snow zones form 0.5-1.5 Gyrs before present, suggesting that snow zone formation would be delayed until the recent past. However, all of these ΔT values are over-estimates because they ignore movement of the stable layer interface and omit the reduction in core cooling rate induced by stratification and by entrainment due to the underlying convection. We conclude that thermal stratification could delay, but not prevent, the onset of snow formation, though a more complete model of these effects is clearly needed. The adiabatic temperature profile used in our calculations ignores the effect of a stabilizing compositional gradient, the second term in the relation dT/dr = -(αgT)/cp - (h̄/cp)(dξ/dr). The first term, calculated directly from the models, is ≈ 0.6-1 K km-1 at the CMB. Taking h̄ and cp from Table A1 and dξ/dr ≈ 4 × 10-7 m-1 from Figure 2 means that the second term is ≈ 0.2 K km-1. This is an overestimate since dξ/dr depends on dT/dr in the model as discussed above, suggesting that the 'dry' adiabat assumed in the modelling is a good approximation to the 'wet' adiabat that includes compositional variations. Departures from an adiabatic temperature profile affect the energy budget mainly through the secular cooling term since this involves an integral over the temperature. To quantify the effect, we consider for simplicity a linear subadiabatic profile in the top 100 km of the present-day core with the CMB temperature 140 K above an adiabat, corresponding to the most extreme estimates above.
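That 140 K figure can be checked directly against the constant-flux half-space solution of equation (27). The sketch below uses the Qa - Qc deficit quoted above for the run of Figure 4; the CMB radius and conductivity are representative assumptions, and holding the flux at its peak value biases the answer high, consistent with the over-estimates noted above.

```python
# Order-of-magnitude check of conductive CMB warming via the constant-flux
# half-space solution T0(t) = (2*q/k)*sqrt(kappa*t/pi) of equation (27).
# The heat flows are the quoted values at t1; k and the CMB radius are
# representative assumptions, not the paper's tabulated inputs.

import math

Qc, Qa = 0.257e12, 0.875e12              # W, values at t1 quoted in the text
r_c = 1.68e6                             # m, CMB radius (assumption)
q = (Qa - Qc) / (4 * math.pi * r_c**2)   # W/m^2, subadiabatic flux deficit

k = 40.0                                 # W/m/K, core conductivity (assumption)
kappa = 1e-6                             # m^2/s, thermal diffusivity (from text)
t = 2.8e9 * 3.156e7                      # s, the 2.8 Gyr between t1 and onset

dT = (2 * q / k) * math.sqrt(kappa * t / math.pi)
print(f"Conductive CMB warming ~ {dT:.0f} K")   # ~140 K, within the 100-250 K range
```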
The resulting 5% decrease in the secular cooling term produces a change in the cooling rate of ~1 K Gyr-1. Changes in sulfur concentration and solid fraction will also decrease in the presence of thermal stratification, as the corresponding terms are proportional to the cooling rate (equations (22) and (23)), but the overall effect on core cooling rate, and hence snow zone growth rate, is very small compared to other uncertainties in the calculation. The secular cooling term in the entropy balance is reduced by a greater amount than its counterpart in the energy balance, but this effect is countered by a reduction in the conduction entropy Ek, since the conduction profile is shallower and hotter than an adiabat, resulting in a small change to the predicted dynamo entropy. Even weaker effects are predicted at earlier times or for younger stable layers. These simple calculations suggest that the assumption of neglecting departures from the adiabat in the energy-entropy balance is justified. Finally, we expect that the presence of thermal stratification would reduce our estimates of the gravitational energy generated by migration of solid within the snow zone, though our calculations suggest that this term makes a negligible contribution to the overall energy and entropy budgets when the snow zone is only a few hundred km (Figure 4). Nevertheless, the current parameterization of this term is simple at best and would benefit greatly from new experimental and/or numerical studies. Conclusions The presence of an approximately 100-km-thick snow layer at the top of the Martian core is consistent with the planet's magnetic history and available observational constraints on its core structure, temperature and composition. Snow zone nucleation is favored for lower initial sulfur concentrations and core temperatures and for smaller core sizes. Snow zones that grow to ≈ 400 km produce enough gravitational energy to restart the dynamo, suggesting that this is an upper limit on the layer depth in the Martian core. Future work simulating slurry dynamics with and without thermal stratification should enable improved parameterizations of the thermal and compositional profiles in the snow zone and of the gravitational energy terms in the energy balance, and should therefore allow some of the assumptions invoked in this study to be relaxed. Future parameterized models could also include coupled core-mantle evolution. Considering the core in isolation has allowed us to focus on snow zone dynamics, divorced from the complexities and uncertainties in mantle evolution modeling, but at the expense of being restricted to a narrow range of CMB heat flow time-series and initial core temperatures. In particular, if solutions satisfying the available constraints can be obtained with lower initial CMB temperatures it will be possible to obtain thicker present-day snow zones than we have found. Snow layers would not support seismic shear waves owing to the spatially dispersed nature of the solid phase, but could affect the core density. If these differences can be detected by observations from future spacecraft missions, they will provide profound insight into the thermochemical evolution of the Martian interior. Table A1. Input parameters used in the thermal history model. Gravity g, gravitational potential ψ and pressure P are derived from the density assuming hydrostatic balance. Density is assumed depth-independent as interior structure models predict only 5-10% variation across the Martian core (Rivoldini et al.
2011); the constant value 7211 kg m-3 was sometimes used instead of the Williams and Nimmo value of 7011 kg m-3 as this accounts for the increase of ρ with depth in their model. The thermal expansion coefficient α, heat of reaction coefficient h̄ and volume change on melting ΔV that appear in the governing equations are not included because the terms in the energy balance in which they appear are small enough to neglect. The latent heat is L = Tm Δs. Bold indicates the default value when multiple values have been used. Here W04 refers to Williams and Nimmo (2004), F05 is Fei and Bertka (2005) and S07 is Stewart et al. (2007).
10,526.6
2017-09-04T00:00:00.000
[ "Physics", "Geology", "Environmental Science" ]
A fuzzy rank-based ensemble of CNN models for classification of cervical cytology Cervical cancer affects more than 0.5 million women annually, causing more than 0.3 million deaths. Detection of cancer in its early stages is of prime importance for eradicating the disease from the patient's body. However, regular population-wide screening for cancer is limited by its expensive and labour-intensive detection process, where clinicians need to classify individual cells from a stained slide consisting of more than 100,000 cervical cells for malignancy detection. Thus, Computer-Aided Diagnosis (CAD) systems are used as a viable alternative for easy and fast detection of cancer. In this paper, we develop such a method where we form an ensemble-based classification model using three Convolutional Neural Network (CNN) architectures, namely Inception v3, Xception and DenseNet-169, pre-trained on the ImageNet dataset, for Pap stained single cell and whole-slide image classification. The proposed ensemble scheme uses a fuzzy rank-based fusion of classifiers by considering two non-linear functions on the decision scores generated by said base learners. Unlike the simple fusion schemes that exist in the literature, the proposed ensemble technique makes the final predictions on the test samples by taking into consideration the confidence in the predictions of the base classifiers. The proposed model has been evaluated on two publicly available benchmark datasets, namely, the SIPaKMeD Pap Smear dataset and the Mendeley Liquid Based Cytology (LBC) dataset, using a 5-fold cross-validation scheme. On the SIPaKMeD Pap Smear dataset, the proposed framework achieves a classification accuracy of 98.55% and sensitivity of 98.52% in its 2-class setting, and 95.43% accuracy and 98.52% sensitivity in its 5-class setting. On the Mendeley LBC dataset, the accuracy achieved is 99.23% and the sensitivity is 99.23%. The results obtained outperform many state-of-the-art models, thereby justifying the effectiveness of the proposed approach. The relevant codes of the proposed model are publicly available on GitHub. In simple fusion schemes, part of the information provided by the base learners may remain unused. Keeping this fact in mind, in this work, we propose a novel approach where we utilize all the information available from the different base learners by quantifying two important parameters: the closeness of the prediction probability to 1 and the deviation of the prediction probability from 1. Moreover, our approach fuses all such quantified values for making the final prediction so that it can deal with the classification problem under consideration more effectively and make a fairly accurate prediction. Ensemble learning is one such alternative where decision scores from multiple classifiers are fused to predict the final class label of an input sample. An ensemble model aims to capture the salient features of all its constituent models, thus performing better than the individual base classifiers. Such models are robust since ensembling diminishes the dispersion or spread of the predictions made by the base models. The variance in the prediction errors of the base classifiers gets reduced in the ensemble model by the addition of some bias to the competing base learners. In the present work, we formulate a fusion strategy that uses the decision scores obtained by three base Convolutional Neural Network (CNN) classifiers, namely, Inception v3 by Szegedy et al. 4, Xception by Chollet 5 and DenseNet-169 by Huang et al.
6 (pre-trained on the ImageNet dataset 7) to form the ensemble. We use a fuzzy ranking-based approach, where the probability scores are subjected to two non-linear functions, an exponentially decaying function and the tanh function, to assign the ranks to the class probabilities predicted by a base learner. The ranks assigned by the two non-linear functions are multiplied. The same process is repeated for each base learner, and the rank products from each classifier are added to get the final ranks. We use two different functions of different concavities so that they can generate complementary results. Fusion entails consolidating the multiple ranks associated with an identity and determining a new rank that would aid in establishing the final decision. The main motive of using two ranks is to consider both the closeness to and the deviation from the expected result corresponding to the primary classification result. A smaller deviation corresponds to a lower value of the product and a better result. So, the class having the lowest value of this sum of products of ranks is deemed the predicted class of the ensemble model. Here, the two non-linear functions have opposite concavity in the range [0, 1] and hence a higher confidence score results in a larger value of rank in one function and a smaller value in the other, and our aim is to minimize this product. If the confidence score of a prediction is high, then this sum of products yields a lower value than if the confidence score is low, as explained in detail later. Several methods have been developed over the years for the automatic classification of cervical cancer using cytology images. Traditional machine learning-based methods [10][11][12], although computationally less complex, require extraction of handcrafted features and feature selection for classification. This limits the performance of such models for two main reasons: (1) extraction of handcrafted features becomes difficult for complex data patterns, and (2) all these features may not be sufficiently informative, thus adversely affecting the model's performance. However, the method of Win et al. 13 yielded commendable performance. They used a shape-based iterative method for nuclei detection, followed by a marker-controlled watershed approach for separating overlapping cytoplasm. The authors performed feature extraction from these segmented nuclei and used a Random Forest classifier for feature selection. They achieved a classification accuracy of 94.09% on the SIPaKMeD dataset by Plissiti et al. 9 by ensembling traditional classifiers like Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). Deep learning-based methods can avoid the aforementioned limitations of traditional machine learning techniques in the following ways: (1) deep learning models perform end-to-end classification without the need for feature engineering; (2) self-learning is induced in these models, thereby making them effective at learning complex patterns in datasets. CNNs are prevalent for classifying image data; for example, Zhang et al. 14 performed end-to-end classification using a deep CNN architecture and evaluated their method on the HErlev dataset, achieving an accuracy of 98.3%. CNN models learn to extract invariant features automatically through the convolution of images with filters, exhibit translational invariance, and often perform better than classical machine learning or image processing methods, making them popular.
However, deep learning models require a large amount of labelled data for producing satisfactory results, but such large volumes of medical data are difficult to acquire since experts (doctors or pathologists) are needed to classify the acquired data. So a popular concept, called transfer learning, is used, where a deep learning model trained on a large dataset is re-used for classification on the current data. Li et al. 15 performed transfer learning using the Inception v3 deep CNN model on a cervical immunohistochemistry image dataset and obtained only 77.3% accuracy. Ensemble learning is a strategy that considers decisions obtained from more than one model for making the final decision. Some simple fusion schemes have been explored in the literature: Sarwar et al. 16 used an average probability-based ensemble and Xue et al. 17 used a majority voting-based ensemble technique. However, such simplistic ensemble models do not take into account the confidence of predictions and use pre-determined or fixed weights associated with the base learners. Keeping this in mind, in this research, we propose a novel ensemble technique which fuses the decision scores from three base CNN classifiers, namely Inception v3 4, Xception 5 and DenseNet-169 6, while taking into account the confidence in the predictions of the base learners. Motivation and contributions. The tedious detection process of cervical cancer makes it impossible to conduct regular screening throughout the population. In this paper, we propose an automated screening framework that is both accurate and time-efficient. Since the data available in the biomedical domain is scarce, an end-to-end classification system using purely deep learning methods may fail to perform satisfactorily on unseen data. So, we use three transfer learning-based CNN classifiers to form an ensemble model where the predictions from multiple competing models are taken into account. Although simple fusion schemes like majority voting, weighted averaging, etc., have been used in the literature, they do not consider the confidence in the predictions of a classifier while computing the predictions. In the proposed method, we develop a mathematical model that considers this, thus achieving superior classification performance compared to conventionally used simple ensemble methods. The overall workflow of the framework is shown in Fig. 1. The contributions of the current research work are as follows: 1. Ensemble learning using three base learners, namely Inception v3 4, Xception 5 and DenseNet-169 6, has been implemented, which boosts the performance of the overall model for making predictions on the scarce available data. 2. The proposed ensemble method applies two non-linear functions of different concavities to determine the fuzzy ranks of the classes from the decision scores. The sum of products of the ranks of the three base learners is computed and the class with the lowest fused rank is attributed as the predicted class. The use of two non-linear functions ensures that the confidence in the predictions of the classifiers is accounted for in the computation of the ranks, thereby leading to superior predictions. 3. The way we quantify the deviation of the predicted value from the expected value is novel. Also, the boost in accuracy brought by the proposed ensemble model is noteworthy. 4. The proposed framework outperforms many state-of-the-art methods on two benchmark cervical cytology image datasets: the SIPaKMeD Pap Smear dataset by Plissiti et al.
9 and the Mendeley Liquid Based Cytology (LBC) dataset by Hussain et al. 18, in terms of classification accuracy and sensitivity. 5. To justify the robustness in performance of the proposed ensemble framework, it has been tested on an additional multi-class medical image dataset, the Zenodo 5K dataset, and the results obtained prove the superiority of the ensemble approach. Proposed method In this section, we give a brief overview of the base learners we use and the necessary customization we apply to the basic models, followed by the implementation details of the proposed fuzzy rank-based fusion of confidence scores of the base learners. Here our motive for ensembling is to fully utilize each of the confidence factors generated by the base learners by mapping them onto non-linear functions. One of the mapped values signifies the closeness to 1 and the other one signifies the deviation from 1. This proposed approach overcomes the shortcoming of the conventional ranking methods which do not consider the fact mentioned above 19,20, which may lead to an incorrect result. In the present study, we use three base learners and evaluate our method on bio-medical image datasets. Initially, we train the base learners (customizations of pre-trained models trained on ImageNet 7) and take the confidence scores. After that, we map the scores onto two different functions having different concavities to generate non-linear fuzzy ranks and produce a fused score by combining these two ranks, which helps us to quantify the total deviation from the expected result. A lower deviation indicates better confidence towards a particular class. (In Fig. 1, the Pap stained image under "Input Images" has been taken from the publicly available SIPaKMeD Pap Smear dataset 9 used in this research, and the complete image has been made by R.K. using Google Slides.) The class having the lowest deviation value is considered the winner and is assigned as the final class value. Here, we first give a brief overview of the pre-trained CNN models used as base learners. Inception v3. The most salient feature of the Inception v3 architecture developed by Szegedy et al. 4 is the numerous parallel convolutions supported by the structure. This allows deep features to be generated while controlling the overfitting problem and using less computation than monolithic architectures like VGG-19. Figure 2 shows the architectural diagram of the Inception v3 CNN model. Xception. The Xception architecture developed by Chollet 5 was inspired by the Inception v3 architecture, consisting of the same number of model parameters as the latter, but using them more efficiently. Chollet showed that regular convolutions and depthwise separable convolutions lie at the two extremes of a discrete spectrum, with the Inception modules lying in the middle. Thus, the Inception modules were replaced with depthwise separable convolutions, which provided a boost in classification performance while incurring the same computational cost. The basic structure of the Xception model is shown in Fig. 3. DenseNet-169. The DenseNet architectures by Huang et al. 6 are distinctive in the sense that they provide a rich feature representation while also being computationally efficient. The reason is that each layer in the DenseNet model is a concatenation of the feature maps of the current layer and all its preceding layers, as shown in Fig. 4.
This makes the model compact, since fewer channels are accommodated in the convolutional layers, thus decreasing the number of parameters. Cascade of pre-trained model and customized layers. For better utilization of the information generated by the pre-trained models, we add some customized layers based on the structure of the models. Next to the pre-trained models, we add a fully connected layer of 1024, 1028 and 256 nodes for Inception v3, DenseNet-169 and Xception, respectively. This fully connected layer is associated with the Rectified Linear Unit (ReLU) activation function to overcome the vanishing gradient problem and allow faster learning. Then a dropout layer of 20% is added to avoid the problem of overfitting. If we directly calculate the confidence scores from such a high number of hidden units, we may lose some important information. To address this issue, we first cluster the necessary information into a smaller number of hidden nodes, namely 128, 64 and 32 nodes for Inception v3, DenseNet-169 and Xception, respectively. Then, at the end, we implement class-number-specific output units. The hyperparameters used for training the CNN models have been set through extensive experiments and are shown in Table 1. The number of epochs used for fine-tuning on the datasets has been set to 20, because the model weights are already optimized for image classification through pre-training on the ImageNet data, and we only need to train the customized layers that have been added to the CNN models, while keeping the weights of the other (pre-trained) layers fixed. Proposed ensemble approach. In this section, we detail the mathematical formulation of the proposed ensemble method. Let the confidence scores for the C classes given by base learner i be (P^i_1, P^i_2, P^i_3, ..., P^i_C), with i = 1, 2, 3. At first, we accumulate all the confidence scores obtained from each of the base learners. As (P^i_1, P^i_2, P^i_3, ..., P^i_C) represent probabilities, they satisfy Eq. (1): P^i_1 + P^i_2 + ... + P^i_C = 1. (1) The tuples (R^i_11, ..., R^i_1C) and (R^i_21, ..., R^i_2C) are fuzzy ranks generated by using the two non-linear functions; the fuzzy ranks are calculated by Eqs. (2) and (3): R^i_1k = 1 − tanh((P^i_k − 1)^2/2) (2) and R^i_2k = 1 − exp(−(P^i_k − 1)^2/2), (3) which are plotted in Fig. 5. Equation (2) provides a reward for a classification: if x approaches 1, then the value of Eq. (2) increases, i.e., the amount of reward increases. Conversely, Eq. (3) calculates the deviation from 1: if x approaches 0, the deviation grows. Let (RS^i_1, RS^i_2, RS^i_3, ..., RS^i_C) be the fused rank scores, where RS^i_k is given by Eq. (4): RS^i_k = R^i_1k × R^i_2k. (4) Figure 5. The non-linear functions used to generate fuzzy ranks in the proposed ensemble framework; x denotes the probability of a class for a sample. (a) Quantifies the deviation from the objective for a class having prediction probability x; the deviation decreases as x increases and eventually becomes 0 when x = 1. (b) Quantifies the reward to be given to a class having prediction probability x; the reward increases as x increases and eventually becomes 1 when x = 1. Equation (2) is concave downward in its domain of definition [0, 1] for this study. As the negative of this function is the quantity of concern, it will be concave upward. Because of its positive gradient in [0, 1], the output rank score tries to shift towards 1. Equation (3) is concave upward in its domain of definition [0, 1] for this study. As the negative of this function is the quantity of concern, it will be concave downward. Because of its negative gradient in [0, 1], the output rank score tries to shift towards 0.
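To make the fusion described by Eqs. (1)-(6) concrete, here is a minimal Python sketch (our illustration, not the authors' released code), assuming each base learner outputs a softmax confidence vector over the C classes:

```python
import numpy as np

def fuzzy_rank_fusion(scores):
    """scores: (n_classifiers, n_classes) array; each row sums to 1 (Eq. (1))."""
    p = np.asarray(scores, dtype=float)
    r1 = 1.0 - np.tanh((p - 1.0) ** 2 / 2.0)   # Eq. (2): reward, -> 1 as p -> 1
    r2 = 1.0 - np.exp(-(p - 1.0) ** 2 / 2.0)   # Eq. (3): deviation, 0 at p = 1
    rs = r1 * r2                               # Eq. (4): rank score per classifier/class
    fs = rs.sum(axis=0)                        # Eq. (5): fused score per class
    return int(np.argmin(fs))                  # Eq. (6): least total deviation wins

# Check against the worked example later in the text: a probability of 0.261
# yields ranks of about 0.735 (Eq. (2)) and 0.238 (Eq. (3)), i.e., a rank
# score of about 0.175 (Eq. (4)).
```

Because the fused score in Eq. (5) is a plain sum over base learners, the same routine works unchanged for any number of classifiers.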
The rank score is the product of the reward and the deviation for a particular confidence score obtained from a base learner. As the range of Eq. (3) is smaller than the range of Eq. (2), the nature of the product is governed by Eq. (3). A lower deviation calculated from the confidence score implies a lower rank score. Finally, only the rank scores matter for calculating the fused scores. RS^i_k signifies the confidence level towards a particular class, as it is the product of the fuzzy ranks generated by the two different types of functions. Now the fused score tuple is (FS_1, FS_2, FS_3, ..., FS_C), where FS_k is given by Eq. (5): FS_k is the sum of RS^i_k over the three base learners, i.e., FS_k = RS^1_k + RS^2_k + RS^3_k. (5) This fused score can be viewed as the final score corresponding to each class. We then find the class which has the least fused score and consider it the winner, using Eq. (6): the predicted class is arg min_k FS_k. (6) The computational complexity of the fusion strategy is O(number of classes). From the plot of the product of the two rank-generating functions, shown in Fig. 6, it is clear that the final rank decreases with an increase in the confidence (probability) score, which supports the correctness of the formulation. The flow diagram of the proposed ensemble method is shown in Fig. 7. Figure 8 shows an example of the proposed method for an image from the Mendeley LBC dataset (4-class). Here, for an image belonging to class 2, we collect the probability values from the three base learners for each of the four classes, shown in Fig. 8a-c, respectively. The probability value belonging to class 1 given by Inception v3 is 0.261. So the corresponding ranks are 0.735 and 0.238, as obtained from Eqs. (2) and (3). The rank score thus becomes 0.175 by Eq. (4). Similarly, we calculate rank scores for each of the three base learners for the four classes. We get 0.175, 0.134 and 0.148 as the rank scores for class 1 from Inception v3, Xception and DenseNet-169, respectively. The fused score becomes 0.458 by Eq. (5). Similarly, 0.426, 0.594 and 0.588 (refer to the "Fused Score" column of Table (d) of Fig. 8) are the fused scores for classes 2, 3 and 4, respectively. We can see that the winner chosen by Inception v3 and DenseNet-169 is class 2, but by Xception it is class 1. Here our fusion method works properly and makes a robust decision. The overall fused score is minimum for class 2, so by Eq. (6) the predicted class is 2, as stated at the beginning of this explanation. Results and discussion In this section, we report the results obtained by evaluating the proposed ensemble model on two publicly available datasets and discuss the significance of the results obtained. We have also compared the performance of the proposed model with many existing methods to demonstrate the superiority of the proposed method. Dataset description. In the current research, we have used two publicly available benchmark datasets, namely the Mendeley Liquid Based Cytology (LBC) dataset proposed by Hussain et al. 18 and the SIPaKMeD Pap Smear dataset proposed by Plissiti et al. 9 , to evaluate the performance of the proposed ensemble framework. The distribution of images in the dataset is shown in Table 2 and some example images from the dataset are shown in Fig. 9. SIPaKMeD pap smear dataset. The SIPaKMeD pap smear dataset 9 consists of 4049 isolated cervical cell images. The cells are unevenly distributed among five different classes, classified by the experts.
Normal cells are divided into two categories, namely "Superficial-Intermediate" and "Parabasal"; abnormal (but not malignant) cells are categorized into "Koilocytotic" and "Dyskeratotic"; and the final category is benign or "Metaplastic" cells. The distribution of images in the dataset is shown in Table 3 and some examples of images from the dataset are shown in Fig. 10. Evaluation metrics. To validate the performance of the proposed model, we have used four popular evaluation criteria: Accuracy, Precision, Recall and F1-Score. In a binary classification problem, suppose the two classes are labelled positive and negative. A true positive (TP) is a positive sample correctly classified as positive, a true negative (TN) is a negative sample correctly classified as negative, a false positive (FP) is a negative sample classified as positive, and a false negative (FN) refers to a sample belonging to the positive class but classified as being part of the negative class. Extending these measures to a multi-class problem with, say, N classes generates a confusion matrix, say C, in which the columns represent the true class and the rows represent the predicted class. The mathematical expressions of the evaluation metrics obtained from the confusion matrix C are thus given by Eqs. (7), (8), (9) and (10), which, computed per class in a one-vs-rest manner, read: Accuracy = (TP + TN)/(TP + TN + FP + FN) (7), Precision = TP/(TP + FP) (8), Recall = TP/(TP + FN) (9) and F1-Score = 2 × Precision × Recall/(Precision + Recall) (10). Figure 9. Examples of images from the Mendeley LBC dataset 18 . HSIL high squamous intra-epithelial lesion, LSIL low squamous intra-epithelial lesion, NIL negative for intra-epithelial lesion, SCC squamous cell carcinoma. Implementation. Table 4 shows the results obtained by the proposed ensemble framework on the publicly available datasets used in this work under the 5-fold cross-validation experimental setting. The results confirm that the proposed model achieves high classification accuracy and sensitivity, while also being much faster than the current manual screening procedure, justifying the reliability of the automated approach. The results of ensembling various combinations of base learners are reported in Table 5. The proposed combination of Inception v3, Xception and DenseNet-169 obtains the best result on all three datasets and is significantly better than the second-best performance, obtained by the ensemble of Inception v3, VGG-16 and DenseNet-169. The performance of an ensemble depends more upon the ability of the base learners to provide complementary information than upon the individual performance of the base learners. Clearly, the three classifiers used in this research are better suited for the ensemble than the other tested combinations. The proposed framework can be used as a plug-and-play model where new test images are passed through the model to generate predictions through the ensemble scheme, and this will eventually help expert clinicians to make quicker and more accurate decisions. For testing on new samples, about 5 seconds are required per image. So, the proposed CAD method is reliable for use in the field. All the base models are generated by customizing the pre-trained models, and all the pre-trained models have a sufficient number of convolution layers. Hence, we do not need to add more convolution layers to our framework. It can be seen from Table 6 that our model performs well on all the datasets we have tested on. To show that the model is not overfitted even after being trained on a smaller dataset, we have provided loss curves (Fig. 13) for the base learners. A decrease in the validation loss along with the training loss is evident in the provided loss curves for the base learners. It indicates that the base learners we have fine-tuned perform robustly and are not overfitted. Table 5. Results obtained on ensembling various combinations of base learners on all the three datasets used in this study. Comparison to state-of-the-art.
Table 6 shows the classification results obtained by the base classifiers and their ensemble using the proposed ensemble technique. On the SIPaKMeD Pap Smear dataset, the Inception v3 model performs better than the Xception and DenseNet-169 models, whereas the Xception model performs better than the other two on the Mendeley LBC dataset. The proposed ensemble method performs significantly better than all the base classifiers on both datasets. This indicates that the classification capability of different CNN models has some dependency upon the dataset under consideration: Inception v3 performs better on the single-cell image dataset, while Xception performs better on the whole-slide image dataset; but the proposed ensemble method performs robustly by considering the confidence scores from all its base learners. Thus the ensemble model generalizes better than a single CNN classifier. Figure 14 shows the results of some standard CNN models obtained on the datasets, compared to the proposed ensemble framework. Some fusion schemes are popularly used in the literature, like majority voting, probability averaging and weighted probability averaging. Figure 15 shows the comparison of the proposed ensemble scheme to some of these popular ensemble techniques, using the same base classifiers: Inception v3, Xception and DenseNet-169. On both datasets, the weighted probability averaging technique gives classification results closest to the proposed ensemble technique, wherein the weights have been determined experimentally. But this is a static process, since, after the selection of the weights, there is no scope for dynamically refactoring the weights at prediction time. The proposed ensemble model, however, assigns ranks to the classifiers on each test sample based on the confidence in the predictions of the base learners, which leads to superior classification performance. Table 7 compares the proposed approach with some state-of-the-art results on the datasets. No published work on the Mendeley LBC dataset has been found for comparison at the time of writing this manuscript. Error analysis. Figure 16 shows some examples from the SIPaKMeD Pap Smear dataset where one or more base classifiers made wrong predictions on the sample, but the ensemble made the correct prediction. Figure 16a is a sample from the "Metaplastic" class of the SIPaKMeD dataset, which is classified as "Koilocytotic" by DenseNet-169 with a confidence of 31%, and as "Parabasal" by the Xception model with a confidence of 36%. However, being classified as "Metaplastic" by the Inception v3 model with 98% confidence allowed the ensemble to predict the sample correctly. Similarly, the sample in Fig. 16b, originally of class "Parabasal", is misclassified as "Koilocytotic" by the DenseNet-169 model with a confidence of 32%, while the Xception and Inception v3 models predicted correctly with confidence scores of 95% and 97%, respectively, thus allowing the ensemble to predict the sample correctly as "Parabasal". Figure 16a has multiple nuclei in its image, and the cytoplasm in Fig. 16b is not distinguishable. Although both test samples had a bad image quality, the proposed framework was able to classify them correctly, justifying the robust performance of the model. Figure 17 shows some test samples from the SIPaKMeD Pap Smear dataset that were misclassified by the proposed framework.
Figure 17a shows a sample from the "Metaplastic" class which is misclassified as "Parabasal". The nucleus in the image is not distinguishable from the cytoplasm, leading to an incorrect classification by the ensemble model. Figure 17b shows an image belonging to the "Superficial-Intermediate" class, but misclassified as "Koilocytotic". The reason for this might be the intrusion of another Superficial-Intermediate cell in the top right corner of the image. This unwanted cell is not completely included in the image and only part of its cytoplasm is visible. This leads to an erroneous nucleus-to-cytoplasm ratio, leading the framework to classify the image as the "Koilocytotic" class. Statistical analysis. To statistically analyse the viability of the proposed ensemble framework with respect to the base learners used to form the ensemble, McNemar's statistical test 24 is performed. McNemar's test is a non-parametric analysis of paired nominal data. The p-value signifies the probability of two models being similar; thus, a lower p-value is desired. To reject the null hypothesis that the two models are similar, the p-value needs to be smaller than 5%; that is, if p-value < 0.05, we can safely say that the two models under consideration are statistically different. From Table 8, it can be concluded that on both datasets (and in both settings of the SIPaKMeD pap smear dataset) the null hypothesis is rejected, that is, the ensemble model is markedly different from the base learners. Additional test. To further justify the robustness of the proposed ensemble framework, we evaluate it on an 8-class colorectal cancer histopathology dataset: the Zenodo 5K dataset 25 . The distribution of images in the dataset is tabulated in Table 9. Table 10 shows the results obtained upon evaluation using the fivefold cross-validation scheme. From the table, it can be noted that the ensemble of the classifiers yields results significantly better than its constituent base learners in this multi-class data arrangement, justifying that the proposed ensemble method robustly boosts the performance of the base learners. A comparison of the results obtained by the proposed method and some state-of-the-art methods is tabulated in Table 11, where the proposed ensemble method is seen to outperform the previous methods by a significant margin. Figure 16. Examples of images from the SIPaKMeD Pap Smear dataset 9 where one or more of the base classifiers predict incorrectly, but the ensemble predicts correctly. (a) DenseNet-169 classifies the sample as "Koilocytotic" with confidence 31%, Xception classifies the sample as "Parabasal" with confidence 36% and Inception v3 classifies the sample as "Metaplastic" with confidence 98%; the ensemble prediction is "Metaplastic". (b) DenseNet-169 classifies the sample as "Koilocytotic" with confidence 32%, Xception classifies the sample as "Parabasal" with confidence 95%, and Inception v3 classifies the sample as "Parabasal" with confidence 98%; the ensemble prediction is "Parabasal". Conclusion and future work Cervical cancer is one of the leading causes of mortality among women; population-wide screening is restricted due to the expensive and laborious detection process demanding the expertise of clinicians. In this paper, we develop a CAD framework that classifies cytology images using an ensemble of three standard CNN-based classifiers.
The proposed ensemble model generates ranks from the outputs of the classifiers using two non-linear functions, which helps to take into account the confidence in the predictions of the base learners. The proposed CAD framework, when evaluated on two benchmark datasets for cervical cytology classification, produces competitive results in terms of accuracy and sensitivity to the disease, thus justifying the effectiveness of the framework. The fast detection tool developed can function as a plug-and-play model that requires little intervention from expert clinicians for cervical cancer screening, and is hence suitable for deployment in the field. As discussed previously, some of the images could not be accurately classified by the proposed ensemble model, due to poor image contrast or the presence of overlapping cells. So there might be a need for preprocessing the images, which we would like to address in the future. We may try contrast enhancement techniques or prior segmentation of cells for isolating overlapping cells. We may also consider ensembles of other base learners, and explore different rank generation functions to perform the ensemble.
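As a companion to the "Statistical analysis" subsection above, the sketch below shows one way to run McNemar's test in Python with statsmodels; the 2x2 contingency table here is illustrative, not the paper's data.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: base learner correct / wrong; columns: ensemble correct / wrong.
# The off-diagonal cells (disagreements) drive the test statistic.
table = [[900, 12],
         [41, 47]]

result = mcnemar(table, exact=True)  # exact binomial form, suited to small counts
print(result.statistic, result.pvalue)
# pvalue < 0.05 rejects the null hypothesis that the two models make the
# same errors, i.e., the models are statistically different.
```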
codY and pdhA Expression Is Induced in Staphylococcus epidermidis Biofilm and Planktonic Populations With Higher Proportions of Viable but Non-Culturable Cells Staphylococcus epidermidis biofilm cells can enter a physiological state known as viable but non-culturable (VBNC), where, despite being alive, they do not grow in conventional laboratory media. As such, the presence of VBNC cells impacts the diagnosis of S. epidermidis biofilm-associated infections. A previous transcriptomics analysis of S. epidermidis strain 9142 biofilms with higher proportions of VBNC cells suggested that the genes pdhA, codY and mazEF could be involved in the induction of the VBNC state. However, it was previously demonstrated that VBNC induction is strain-dependent. To properly assess the role of these genes in VBNC induction, the construction of mutant strains is necessary. Thus, herein, we assessed whether VBNC cells could be induced in strain 1457, a strain amenable to genetic manipulation, and whether the previously identified genes were involved in the modulation of the VBNC state in this strain. Furthermore, we evaluated the formation of VBNC cells in planktonic cultures. Our results showed that, despite being commonly associated with biofilms, the proportion of VBNC cells can be modulated in both biofilm and planktonic cultures and that the expression of codY and pdhA was upregulated under VBNC-inducing conditions in both phenotypes. Overall, our study revealed that the formation of VBNC cells in S. epidermidis is independent of the mode of growth and that the genes codY and pdhA seem to be relevant for the regulation of this physiological condition. INTRODUCTION Staphylococcus epidermidis is now considered an opportunistic pathogen responsible for many healthcare-associated infections, mainly those related to biofilm formation on indwelling medical devices (Mack et al., 2013). S. epidermidis biofilm-associated infections are often recurrent, leading to high rates of morbidity (Otto, 2009). The failure of antibiotics to cure these types of infections is primarily associated with the poor capacity to eradicate biofilms, which are known to contain bacterial cells with distinct physiological states (Rani et al., 2007), including viable but non-culturable (VBNC) cells (Cerca et al., 2011a; Zhang et al., 2018; Li et al., 2020). Although VBNC cells cannot grow on standard growth media, these cells present reduced metabolic activity, replication rates and gene transcription (Lleo et al., 2000; Zhang et al., 2018). For this reason, their detection with traditional culture-based methods and, consequently, the diagnosis of S. epidermidis biofilm-related infections is hindered (Zandri et al., 2012). Moreover, S. epidermidis VBNC cells can be more tolerant to antibiotics, limiting the efficacy of current treatment options. As such, the study of the mechanisms underlying the development of this physiological state in S. epidermidis is of utmost importance. An in vitro model aimed at inducing the formation of VBNC cells in S. epidermidis biofilms, developed on strain 9142, demonstrated that the addition of a high concentration of glucose (1%) increased the proportion of VBNC cells, while the addition of magnesium chloride (MgCl2, 20 mM) prevented the formation of VBNC cells. This modulation resulted in biofilms with approximately the same number of viable cells but about a 1 log difference in the number of culturable cells (Cerca et al., 2011a; Carvalhais et al., 2014).
Importantly, although glucose enrichment was associated with medium acidification, it was formerly demonstrated that the prevention of the VBNC state is a pH-independent phenomenon, since the addition of MgCl2 does not prevent acidification of the medium (Cerca et al., 2011a). The applicability of this model to induce the formation of VBNC cells in S. epidermidis biofilms was further analysed in 19 clinical and 24 commensal isolates (Carvalhais et al., 2018). Most of the clinical isolates tested (70%) showed at least a 0.5 log10 decrease in culturability, whereas only 33% of the commensal isolates presented a similar reduction, suggesting that VBNC cell induction by glucose and prevention by magnesium chloride is not a universal phenomenon amongst S. epidermidis isolates. A possible explanation for this isolate-specific response may be related to transcriptomic changes, for instance, in genes involved in metabolism and oxidative stress (Keren et al., 2011; Carvalhais et al., 2014; Postnikova et al., 2015). Therefore, the analysis of genes whose products could be associated with the regulation of this physiological state is crucial to underpin the mechanisms behind its emergence. Previously, using an RNA-Sequencing (RNA-Seq) approach, we detected changes in the transcription of the genes codY, mazE, mazF and pdhA in biofilms with a higher proportion of VBNC cells, suggesting that these genes could be linked to the emergence of the VBNC state in S. epidermidis strain 9142. To confirm this hypothesis, the study of strains lacking the genes of interest is essential. S. epidermidis is known to be difficult to manipulate genetically, with only a few strains known to be amenable (Mack et al., 2001; Winstel et al., 2015; Winstel et al., 2016; Galac et al., 2017), S. epidermidis strain 1457 being the most frequently used for mutagenesis studies. Unfortunately, earlier studies aiming to modulate VBNC cell proportions in S. epidermidis biofilms did not include this strain. Therefore, to determine if S. epidermidis 1457 could be a suitable candidate for the study of the mechanisms behind the formation of VBNC cells, the ability of this strain to form VBNC cells needs to be investigated. Thus, herein, we tested the induction of VBNC cells in biofilms formed by strain 1457, using the previously optimized in vitro VBNC cell induction model, and assessed the expression of the genes codY, mazE, mazF and pdhA. Additionally, we investigated the suitability of this model in planktonic cultures, further exploring the potential role of the previously identified genes in mediating the VBNC state in planktonic cells. Strains and Growth Media S. epidermidis 1457, a strain isolated from a venous catheter-associated infection (Mack et al., 1992), was used for this study, together with the commensal isolate COM040A (skin sample from a healthy volunteer) and the clinical isolate PT11004 (isolated from a bloodstream infection), which were previously shown to accumulate high amounts of VBNC cells (Carvalhais et al., 2018). Additionally, to validate previous RNA-Seq data, strain 9142 was also included. For each experiment, all S. epidermidis strains were grown directly from glycerol stocks (30% glycerol) in tryptic soy broth (TSB, Merck, Darmstadt, Germany) at 37°C with shaking at 120 rpm (10 mm orbit shaker).
The optical density (OD) of overnight suspensions was adjusted, at 640 nm, to 0.25 ± 0.05, corresponding to ≈ 2 × 10^8 colony-forming units/mL (CFU/mL) (Freitas et al., 2014), to be used as the inoculum for all the experiments subsequently described. Biofilm Cultures Biofilms were formed in 24-well plates (Orange Scientific, Braine-l'Alleud, Belgium) by inoculating 10 μL of overnight suspensions, previously adjusted to OD640nm = 0.25 ± 0.05, into 1 mL of TSB supplemented with 0.4% glucose or 0.4% glucose plus 20 mM MgCl2, for further induction and prevention of the VBNC state, respectively. The plates were then incubated for 24 h at 37°C and 120 rpm. After that period, the spent medium was removed and replaced by fresh TSB supplemented with either 1% glucose (induced VBNC state) or 1% glucose plus 20 mM MgCl2 (prevented VBNC state), and the biofilms were grown under the same temperature and agitation conditions for an additional 24 h. Thereafter, the biofilm bulk fluid was removed and the biofilms washed twice with 500 μL of 0.9% NaCl. Finally, biofilms were scraped from the plate bottom and suspended in 1 mL of the same saline solution. Planktonic Cultures For the analysis of VBNC cell formation in planktonic cells, 24 h and 48 h cultures, grown in 10 mL Erlenmeyer flasks, were evaluated. In the case of the 24 h planktonic assays, a 1:100 dilution of the overnight growth was performed in TSB supplemented with 1% glucose (induction of the VBNC state) or 1% glucose plus 20 mM MgCl2 (prevention of the VBNC state) and incubated at 37°C and 120 rpm for 24 h. The second approach (48 h growth) aimed to mimic the conditions used for biofilm growth. This consisted of pre-growing bacteria in TSB supplemented with 0.4% glucose (plus 20 mM MgCl2 for the prevented VBNC condition) for 24 h, followed by centrifugation of the cells (16,000 g, 10 min, 4°C) and replacement of the spent medium with fresh TSB supplemented with 1% glucose or 1% glucose plus 20 mM MgCl2. The suspensions were then grown for an additional 24 h under the same temperature and agitation conditions. Assessment of VBNC State Induction in Both Biofilm and Planktonic Cultures At the selected time points, bacterial cells from either biofilm or planktonic cultures were collected and sonicated for 10 s at 33% amplitude (Ultrasonic Processor Model CP750, Cole-Parmer, IL, USA) to dissociate cell clusters and create a homogeneous cell suspension. Importantly, the selected sonication cycle has no significant effect on cell viability, as previously determined by CFU counting and propidium iodide incorporation (Freitas et al., 2014). The quantification of the total amount of suspended cells was performed by OD640nm measurements, as previously shown (Freitas et al., 2014). Viable cells were quantified by flow cytometry using SYBR Green (1:80,000)/propidium iodide (20 μg/mL) staining, as previously optimized (Cerca et al., 2011b). Samples were acquired on an EC800 flow cytometer (Sony Biotech, CA, USA) at a flow rate of 10 μL/min, and a total of 100,000 events were acquired for each sample. Data analysis was performed using FCS Express 6, considering the populations SYBR+/PI− (live cells) and SYBR+/PI+ (live cells somewhat permeable to PI), and excluding SYBR−/PI+ (dead cells). Finally, the number of culturable cells was determined by CFU counting. Briefly, serial dilutions were performed in 0.9% NaCl and 5 μL of each dilution were plated on TSA plates. Plates were incubated at 37°C for at least 16 h.
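As an aside, the back-calculation of culturability from such serial dilutions is a one-line formula; the plate counts below are illustrative, not measured data from this study.

```python
def cfu_per_ml(colonies, dilution, volume_plated_ml):
    """CFU/mL = colonies / (volume plated in mL x dilution), where
    dilution is the remaining fraction, e.g. 1e-5 for a 10^-5 dilution."""
    return colonies / (volume_plated_ml * dilution)

# Hypothetical count: 36 colonies on a 10^-5 dilution with 5 uL (0.005 mL)
# plated, as in the protocol above.
print(f"{cfu_per_ml(36, 1e-5, 0.005):.1e} CFU/mL")  # 7.2e+08 CFU/mL
```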
The analysis of the proportion of VBNC cells in both biofilm and planktonic cultures was based on the ratio (%) between the values obtained for the induced and prevented conditions (IND/PRE), in terms of culturability (CFU) or OD. RNA Extraction For total RNA isolation from biofilm cells, the bulk fluid of the biofilm culture was discarded, the biofilms were washed twice with 0.9% NaCl and then suspended in the same solution by scraping the cells from the plate bottom. The whole procedure was performed on ice. Three independent biofilms were pooled to reduce biological variability (Sousa et al., 2014) and immediately centrifuged at 16,000 g for 10 min at 4°C. For RNA isolation from planktonic cultures, 1 mL of culture was collected and immediately centrifuged at 16,000 g for 10 min at 4°C. Of note, bacterial cells were suspended in 0.9% NaCl before RNA isolation, since we had previously determined that mRNA quantification was similar to that obtained with RNA-preserving solutions if samples were processed immediately (Supplementary Figure 1). The extraction of RNA from both suspensions was then performed using the ExtractMe RNA Bacteria & Yeast kit (Blirt S.A., Gdansk, Poland), as previously optimized (Franca et al., 2012). In brief, bacterial pellets were suspended in 600 μL of RYBL buffer and transferred into 2 mL tubes containing 0.5 g of acid-washed silica beads (150-212 μm) (Sigma-Aldrich, USA), and the cells were lysed using a BeadBug 6 Microtube Homogenizer (Benchmark Scientific, NJ, USA) for 35 s at ~4.5 rpm. Subsequently, samples were incubated on ice for 5 min, and the cell disruption and cooling steps were repeated three more times. Afterwards, samples were centrifuged at 16,000 g for 1 min at 4°C, and the supernatants were transferred into 2 mL RNase-free tubes and mixed with an equal volume of 70% ethanol. The subsequent steps were performed according to the manufacturer's instructions. RNA samples were then treated with DNase I (Thermo Fisher Scientific Inc, MA, USA) to degrade contaminating genomic DNA. RNA concentration and purity (A260/A280 and A260/A230) were determined on a NanoDrop One (Thermo Fisher Scientific Inc) and RNA integrity was inferred by visualization of the 23S/16S rRNA banding pattern on a 1% non-denaturing agarose gel. Complementary (c)DNA Synthesis Total RNA concentration was adjusted to 250 ng in all samples and then reverse transcribed, in a 10 μL reaction volume, using the RevertAid H Minus Reverse Transcriptase enzyme (M-MuLV RT, Thermo Fisher Scientific, Inc.) and random primers (Bioron, Römerberg, Germany) as the priming strategy. The synthesis was performed following the manufacturer's instructions. A control lacking the reverse transcriptase enzyme (no-RT control) was prepared to later determine the level of genomic DNA contamination. qPCR The primers used for qPCR were designed with the support of the Primer3 software (Untergasser et al., 2012), using the S. epidermidis RP62A (for 16S rRNA primers) or 1457 complete genome as a template (NCBI accession no. CP000029.1 and CP020463.1, respectively) (Supplementary Table 1). Each qPCR reaction was prepared in a 10 μL volume containing 2 μL of diluted cDNA or no-RT control (1:400), 5 μL of Xpert Fast SYBR Mastermix (GRiSP, Lda., Porto, Portugal), 0.5 μL of each forward and reverse primer (0.5 μM per reaction) and 2 μL of water. The qPCR run was performed in a CFX96 (Bio-Rad, CA, USA) with the following cycle parameters: 95°C for 2 min, and 40 cycles of 95°C for 5 s and 60°C for 30 s.
A no-template control was included to assess reagent contamination, and a melting curve analysis was performed to ensure the absence of unspecific products and primer dimers. Reaction efficiency was assessed at 60°C by performing 10-fold dilution series of the cDNA samples and was determined from the slope of a standard curve. The expression of the genes tested was normalised to the expression of the reference genes 16S rRNA and gyrB using a variation of the Livak method, according to Eq. (1), where E stands for the reaction efficiency. To simplify the analysis between strains and conditions, the results are represented as the ratio between the induced and prevented conditions (fold-change IND/PRE). E^ΔCt = E^(Ct geometric mean of 16S rRNA/gyrB − Ct target gene) Eq. (1) Statistical Analysis Statistical differences between conditions were determined using either an unpaired t-test with Welch's correction or one-way ANOVA with Tukey's multiple comparisons test, using GraphPad Prism version 7 (Trial version, CA, USA). A p-value less than 0.05 was considered significant. At least three independent experiments were performed for each assay presented. Validation of the VBNC State Induction Model in S. epidermidis 1457 Biofilms To assess if the previously described VBNC induction model could be used in strain 1457, we first compared the total amount of cells (assessed by optical density), the number of living cells (assessed by flow cytometry) and the number of culturable cells (assessed by CFU quantification). For biofilms with an equivalent amount of total and live cells, the induction of the VBNC state resulted in a significantly lower number of cultivable cells, confirming that the formation of VBNC cells in strain 1457 can be induced (Figure 1). Additionally, no differences were found regarding the pH of the media (data not shown), confirming that VBNC state prevention with MgCl2 did not affect the pH of the culture, as previously shown (Cerca et al., 2011a). To better correlate our data with previously published results, we compared the level of VBNC cells in strain 1457 with the reference strains COM040A and PT11004, whose marked ability to enter into a VBNC state was formerly confirmed (Carvalhais et al., 2018). Since we showed above that the numbers of total and live cells were equivalent, this comparison was based on total cells (OD) and total culturable cells (CFU). Although the 70% decrease in culturability in strain 1457 was lower than the one found in the reference isolates (≈ 90%) (Supplementary Figure 2), it is still within the range previously considered relevant in the context of VBNC state induction (Carvalhais et al., 2018). Validation of RNA-Seq Results by qPCR Aiming to determine possible candidates for future mutagenesis studies, the expression of the genes highlighted in a former RNA-Seq analysis of S. epidermidis 9142 biofilms under VBNC-inducing conditions was validated by qPCR (Figure 2). The first step was to compare the results obtained by RNA-Seq and qPCR for biofilms of strain 9142. Although a higher fold-change was observed in the qPCR results, the ratios were not significantly different from the ones obtained by RNA-Seq, confirming the results previously obtained (codY, 1.5 ± 0.4; mazE, not applicable; mazF, 1.0 ± 0.6 and pdhA, 1.7 ± 0.2). Secondly, we assessed the expression of the selected genes in biofilms of the reference isolates, as well as in strain 1457. Not surprisingly, strain-to-strain variability was observed. Nevertheless, qPCR data confirmed that all tested genes were upregulated under VBNC-inducing conditions in the reference isolates.
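To make the normalisation of Eq. (1) concrete, the sketch below computes a fold-change IND/PRE from Ct values; the efficiency and Ct numbers are illustrative placeholders, not measurements from this study.

```python
from math import sqrt

def relative_expression(e, ct_16s, ct_gyrb, ct_target):
    """Eq. (1): E**(Ct_ref - Ct_target), with Ct_ref the geometric mean of
    the Ct values of the reference genes 16S rRNA and gyrB, and E the
    reaction efficiency (E = 2 for a 100% efficient reaction)."""
    ct_ref = sqrt(ct_16s * ct_gyrb)  # geometric mean of the two reference Cts
    return e ** (ct_ref - ct_target)

induced = relative_expression(2.0, 12.1, 21.7, 24.9)
prevented = relative_expression(2.0, 12.3, 21.9, 26.1)
print(induced / prevented)  # fold-change IND/PRE; > 1 indicates upregulation
```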
Interestingly, although in strain 1457 the expression of the genes codY and pdhA was significantly increased in the induced VBNC state, the expression of the mazEF complex was not significantly affected. Applicability of the VBNC State Induction Model in Planktonic Populations After validating the applicability of the VBNC state induction model in biofilms formed by strain 1457, we became interested in determining if this experimental model was also applicable to planktonic cultures, something yet undetermined. As observed in biofilms, a significant decrease in the number of culturable cells was also found under VBNC-inducing conditions for all isolates tested, whereas the total amount of planktonic cells was identical in both induced and prevented conditions (≈ 100%) (Figure 3A). Interestingly, for strain 1457, the induction of VBNC cells in planktonic cultures reached 48%, while in biofilms it reached 70%. However, these results are not directly comparable, since the growth conditions and incubation time differed: while VBNC cell induction in biofilm cultures was initiated in a pre-established 24 h-old biofilm, in planktonic cultures the VBNC state was induced from the start of the incubation period. Thus, to better mimic the biofilm experimental setup, another experiment was conducted, in which planktonic cultures were first allowed to grow for 24 h, followed by another 24 h of growth in TSB with 1% glucose or 1% glucose plus 20 mM MgCl2. Notably, with longer incubation periods the proportion of VBNC cells in planktonic cultures (Figure 3B) reached levels similar to what was previously observed in biofilms (Supplementary Figure 2). Subsequently, we aimed to understand if the genes previously identified as potentially involved in the regulation of VBNC cell formation in biofilms could also play a role in planktonic cells. As can be seen in Figure 4, all genes were upregulated under the VBNC-inducing conditions, with codY and pdhA reaching statistical significance. Interestingly, when comparing the fold-change IND/PRE between planktonic and biofilm cells (Figures 2, 4), the fold-change of codY and pdhA was higher in planktonic cells. Although the expression of both the mazE and mazF genes seemed to be more pronounced when the VBNC state was induced in planktonic cultures, this difference was not statistically significant. DISCUSSION The induction of VBNC cells in S. epidermidis biofilms was previously reported in a wide range of clinical and commensal isolates; however, it was also found that not all strains formed VBNC cells under our in vitro VBNC state induction model (Cerca et al., 2011a; Carvalhais et al., 2018). Until now, the ability of S. epidermidis 1457, a strain widely used in genetic manipulation studies, to form VBNC cells using the previously developed model was unknown. Our results showed that the supplementation of the culture medium with 1% glucose induced a reduction in the culturability of 1457 biofilm cells of about 70% when compared to biofilms grown in media supplemented with 1% glucose plus 20 mM MgCl2. Earlier, it was shown that the induction of VBNC cells in S. epidermidis biofilms led to modifications in the cells' transcriptomic and proteomic profiles. Carvalhais et al. reported, using an RNA-Seq approach, the upregulation of the genes codY, mazE, mazF and pdhA in S. epidermidis 9142 biofilms with higher proportions of VBNC cells, which indicated a potential involvement of these genes in the emergence of the VBNC state.
Even though RNA-Seq is a powerful technique to assess gene transcription, it is important to validate the results through an alternative method, with qPCR still considered the gold standard for gene expression quantification assays. Therefore, using qPCR, we first validated, in biofilm cultures, the RNA-Seq data previously obtained with strain 9142, and then assessed the expression of the genes of interest in strain 1457, as well as in the reference strains PT11004 and COM040A. Interestingly, using qPCR we were able to detect the expression of the gene mazE in both the induced and prevented VBNC conditions, while in the RNA-Seq analysis mazE was only detected in one of the conditions. Additionally, mazE and mazF expression in strains 9142 and PT11004 was notably upregulated when the VBNC state was induced, but not in the target strain 1457, suggesting that mazEF might not have a key impact on VBNC state induction in this strain. On the contrary, codY and pdhA expression was noticeably upregulated in all strains tested, including 1457, which supports the hypothesis that these genes may be involved in the induction of VBNC cells. Unlike that of PdhA, the function of CodY has already been characterized in other strains. CodY is a repressor of hundreds of genes implicated in the transition from the exponential to the stationary growth phase, i.e., when nutrients become limited (Joseph et al., 2005; Barbieri et al., 2015). Additionally, it seems to regulate the agr quorum-sensing system, as well as genes associated with biofilm formation in Staphylococcus aureus (Majerczyk et al., 2008). Although the onset of VBNC cells is commonly associated with the biofilm phenotype, non-culturable cells have also been reported in planktonic populations of several species, such as S. aureus and Escherichia coli (Xu et al., 2018). However, to our knowledge, there are no reports of VBNC cells in planktonic populations of S. epidermidis. As such, we aimed to understand if our VBNC state induction model could be applied to planktonic populations. As observed in biofilms, the decrease in culturability was more pronounced in the reference isolates; however, a significant proportion of VBNC cells was also obtained in planktonic populations of S. epidermidis 1457. Moreover, VBNC induction was significantly higher when planktonic populations were grown under biofilm-mimicking conditions (48 h of total growth), showing that the experimental setup with 24 h-old planktonic populations was not the most appropriate for the comparison with biofilms. Based on these results, we hypothesized that the lower proportion of VBNC cells in 24 h planktonic cultures may be, in part, related to the lower starting cell density, as in 24 h-old planktonic cultures the induction of the VBNC state started with ≈ 10^7 CFU/mL, whereas in 48 h-old planktonic cultures it started with ≈ 10^9 CFU/mL. The higher amount of VBNC cells attained in 48 h-old planktonic cultures raised our interest in understanding if the genes identified as playing a role in the emergence of VBNC cells in biofilms could also be involved in planktonic cultures. The analysis of gene expression showed that codY and pdhA were also upregulated in planktonic cultures under VBNC state-inducing conditions. Although the exact functions of codY and pdhA in the emergence of VBNC cells are still unknown, codY is responsible for the repression of genes when the cell is under unfavourable conditions, such as nutritional depletion and environmental stresses (Barbieri et al., 2015; Waters et al., 2016).
This seems to be related to its upregulation when the VBNC condition is induced, since the entrance of bacteria into a non-culturable state creates stress that leads to physiological and metabolic changes. On the other hand, although the role of pdhA has not yet been studied in Staphylococcus spp., the product of this gene, pyruvate dehydrogenase, has been related to the regulation of metabolism and to perturbations of the cell membrane (Zhang et al., 2014; Singh et al., 2018). Taken together, our data provide evidence that VBNC cells of S. epidermidis strain 1457 can be generated in vitro, both in biofilm and planktonic cultures. Additionally, this study reinforced the potential involvement of the genes codY and pdhA in VBNC cell formation in biofilms and revealed, for the first time, the potential involvement of these genes in VBNC cell formation in planktonic cultures. The role of both genes is now being assessed in our laboratory with knockout strains of S. epidermidis 1457. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Files; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS Conceptualization, NC and AF. Investigation, VG and NL. Writing original draft, VG and NL. Writing-review and editing, NC and AF. Supervision, NC and AF. All authors contributed to the article and approved the submitted version.
Sixth Order Numerov-Type Methods with Coefficients Trained to Perform Best on Problems with Oscillating Solutions: Numerov-type methods using four stages per step and sharing sixth algebraic order are considered. The coefficients of such methods depend on two free parameters. For addressing problems with oscillatory solutions, we traditionally try to satisfy specific properties such as reducing the phase-lag error, extending the interval of periodicity or even nullifying the amplification. All of the latter properties come from a test problem that poses an ideal trigonometric orbit as its solution. Here, we propose training the coefficients of the selected family of methods on a wide set of relevant problems. After performing this training using the differential evolution technique, we arrive at a certain method that outperforms the other ones from this family on an even wider set of oscillatory problems. The problem of interest is the second-order initial value problem z″ = f(t, z), z(t_0) = z_0, z′(t_0) = z′_0, (1) and much of the early literature on numerical methods for it deals with the P-stability characteristic [2,3], which is important for addressing stiff oscillatory problems. Chawla [4] presented the following modified Numerov scheme, which has the advantage of being evaluated explicitly: v_1 = z_{k−1}, v_2 = z_k, v_3 = 2z_k − z_{k−1} + h^2 f(t_k, v_2), z_{k+1} = 2z_k − z_{k−1} + (h^2/12)(f(t_{k−1}, v_1) + 10f(t_k, v_2) + f(t_{k+1}, v_3)), with h a steplength that remains constant through the integration of Equation (1). The vectors z_{k−1}, z_k and z_{k+1} approximate z(t_k − h), z(t_k) and z(t_k + h), respectively, while v_1 ∈ R^m, v_2 ∈ R^m and v_3 ∈ R^m are the function evaluations used by the method; the known information at the mesh is utilized according to v_1 = z_{k−1} and v_2 = z_k. Since we have already computed f(t_{k−1}, v_1) in the previous step, we need only evaluate f(t_{k+1}, v_3) and f(t_k, v_2) at every step, and consequently we spend only two stages per step. Tsitouras then suggested a Runge-Kutta-Nyström (RKN)-style method [5]. This technique significantly lowered the cost: just four stages are required to create a sixth-order method, whereas previous implementations required six function evaluations (see [6]). In the years that followed, our group delved thoroughly into the issue. Tsitouras developed eighth-order methods with nine stages per step in [7]. Ninth-order methods were studied in [8]. Simultaneously, a group of Spanish researchers published some highly interesting work on the same topic [9][10][11]. In the present work, we intend to present a new method for better addressing problems with periodic solutions. Traditionally, to achieve this, we try to fulfil various properties coming from a simple test equation. The main novelty here is that we train the available free parameters on a wide set of relevant problems. For this training, we use the differential evolution technique. It is believed that by using this methodology, we will conclude with a method better tuned for oscillatory problems. Theory of Two-Step Hybrid Numerov-Type Methods For numerically addressing Equation (1), higher algebraic order methods are in great demand. We may express t, the independent variable, as one of the components of z. As a result, we concentrate, without loss of generality, on the autonomous system z″ = f(z). Then, an s-stage hybrid Numerov method may be presented as [7] v = (1_s + a) ⊗ z_k − a ⊗ z_{k−1} + h^2 (D ⊗ I_m) f(v), z_{k+1} = 2z_k − z_{k−1} + h^2 (w^T ⊗ I_m) f(v), (2) with I_s ∈ R^{s×s} the identity matrix, D ∈ R^{s×s}, w^T ∈ R^s and a ∈ R^s the coefficient matrices of the method, and 1_s ∈ R^s the vector of ones. For the presentation of the coefficients, we make use of the Butcher tableau [12,13], arranged as a | D over w. Method (2) can be given in matrix form [8]. Since the function evaluations are computed sequentially, these methods are explicit; thus, D is a strictly lower triangular matrix.
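A minimal sketch of one step of form (2) is given below, assuming the stage representation written above (this is our illustration, not code from [7,8]); for clarity it re-evaluates f(v_1), whereas actual implementations reuse it from the previous step. Chawla's scheme from the introduction serves as a concrete instance.

```python
import math

def hybrid_numerov_step(f, z_prev, z_curr, h, a, D, w):
    """One step of an explicit s-stage two-step hybrid method:
    v_i = (1 + a_i) z_k - a_i z_{k-1} + h^2 * sum_j D[i][j] f(v_j),
    z_{k+1} = 2 z_k - z_{k-1} + h^2 * sum_j w_j f(v_j).
    D is strictly lower triangular, so the stages are computed sequentially."""
    s = len(a)
    fv = []
    for i in range(s):
        acc = sum(D[i][j] * fv[j] for j in range(i))
        v_i = (1 + a[i]) * z_curr - a[i] * z_prev + h ** 2 * acc
        fv.append(f(v_i))
    return 2 * z_curr - z_prev + h ** 2 * sum(w[j] * fv[j] for j in range(s))

# Chawla's scheme as an instance: s = 3, a = (-1, 0, 1), d32 = 1 the only
# nonzero entry of D, and w = (1/12, 10/12, 1/12).
a = [-1.0, 0.0, 1.0]
D = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
w = [1 / 12, 10 / 12, 1 / 12]
f = lambda z: -z                      # test problem z'' = -z, solution cos(t)
h = 0.1
z2 = hybrid_numerov_step(f, 1.0, math.cos(h), h, a, D, w)
print(z2, math.cos(2 * h))            # close agreement with cos(0.2)
```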
When s = 5, the associated matrices take the following form: a = (−1, 0, a_3, a_4, a_5)^T, w = (w_1, w_2, w_3, w_4, w_5)^T, and D strictly lower triangular with nonzero entries d_31, d_32, ..., d_54. (3) Since f(v_1) is known from the previous step, four function evaluations are evaluated each step. For attaining sixth algebraic order, we must cancel the associated truncation error terms (see [14]). Seventeen parameters are shared by the scheme under examination, namely nine entries of the matrix D (i.e., d_31, d_32, ..., d_54), five coefficients for the vector w and three coefficients for the vector a. However, in order to obtain sixth order, we must solve 23 condition equations (see Table 5 in [14]). The parameters are fewer than the equations. This is a usual problem when developing Runge-Kutta-type methods. Using simplifying assumptions is a common way to get around this issue. We proceed by setting the standard simplifying assumptions for this family, D·1_s = (a^2 + a)/2 and D·a = (a^3 − a)/6, with the powers of a taken componentwise. Then we spend only the six parameters d_31, d_32, d_41, d_42, d_51 and d_52 to satisfy the above assumptions. Our profit is that all order conditions involving the expressions D·1_s and D·a are discarded from the relevant list given in [14]. As a result, only 9 order conditions remain to be satisfied by the remaining 11 coefficients. We select a_3 and a_4 as free parameters. The remainder of the coefficients are computed successively through a Mathematica [15] listing presented in Figure 1. For exhaustive information on the derivation of the truncation error coefficients, see the review in [14]. Through its link with the so-called T2 rooted trees, Coleman [16] advocated using the B2-series representation of the local truncation error. A first method from this family was given by Tsitouras [5]; writing the corresponding lines in Mathematica, we derive the method given there. Thus, we verify the efficiency of the algorithm, since almost 0.01 seconds are enough for furnishing the coefficients on a Ryzen 9 3900X processor running at 3.79 GHz. Later, Franco [9] chose a_3 = −1/5, a_4 = −2/5. These were all-purpose methods. In [17], we proposed another approach for selecting a_3 and a_4 that concentrates on the method's behaviour on Keplerian-type orbits. There we concluded that the choice a_3 = 3/44, a_4 = −23/38 furnishes a method that best addresses the latter type of problems. Performance of Methods in a Wide Set of Problems with Oscillating Solutions From the above-mentioned family, we intend to develop a particular hybrid Numerov-type scheme. The resulting method has to perform best on problems with oscillating solutions. For this reason, we have chosen to test the following problems. The Bessel equation. The well-known Bessel equation is verified by a theoretical solution expressed through J_0, the zeroth-order Bessel function of the first kind; this equation is also integrated in the interval [0, 10π]. 8. The Duffing equation. Next, we choose this equation, with an approximate analytical solution given in [14]. The three methods F6 [9], M6 [17] and T6 [5] were run on the above problems and for different numbers of steps. The results in [5,9,17] showed the superiority of the latter methods over older schemes. The global errors over the whole mesh were recorded in Table 1. Actually, we present the errors in the form of the number of accurate digits observed. A final row with the mean value is also given in Table 1. On these 8 problems and for the 32 runs carried out, it seems that T6 performed best. The question raised now is whether we can do even better. Phase-Lag and Amplification Errors At first, we select a method of high phase-lag order.
This means that we try to reduce the gap in the angle between the numerical and the theoretical solution of a free oscillator [18]. The latter approach is well suited for problems with periodic solutions. Thus, after considering the test problem z″ = −λ^2 z and applying Method (3), we obtain the phase-lag expression ρ. A sixth-order method shares sixth phase-lag order. After expanding with respect to τ = hλ, we obtain ρ = ρ_8 τ^8 + ρ_10 τ^10 + ⋯ , and the conditions ρ_8 = 0 and ρ_10 = 0 furnish eighth and tenth phase-lag order, respectively. The only acceptable solution of ρ_8 = ρ_10 = 0 is a_3 = 16/15 and a_4 = 1371/245. However, we cannot use such coefficients, as they lie far outside the interval of interest [−1, 1]. Thus, we may draw back and accept only ρ_8 = 0 by choosing the parameters accordingly. We name this method PL8. Another choice is the elimination of the amplification error, denoted by σ. This is the distance from the orbit of the theoretical solution of a free oscillator. Expanding with respect to τ, we arrive at its exact form. Unfortunately, we may not satisfy σ ≡ 1 and ρ_8 = 0 simultaneously, since we arrive at coefficients with indeterminate values. Thus, we may admit only σ ≡ 1 by setting a_3 = −1/2 and a_4 = 7/11. We name this method σ1. Another interesting property is P-stability [2,3]; then, we have to satisfy σ ≡ 1 along the whole half-line of τ. Only implicit methods may address these two requirements simultaneously. Training the Free Parameters in a Wide Set of Periodic Problems Our current project's initial concept is based on [19]. After choosing the free parameters a_3, a_4, we obtain a method named NEW6 and form another column in Table 1 for it. The average value r obtained after the 32 runs may serve as a fitness measure and is meant to be maximized. For the maximization process, we applied the differential evolution technique [20]. DE is an iterative procedure, and in every iteration, named generation g, we work with a "population" of individuals (a_3i, a_4i), i = 1, 2, ⋯, N, with N the population size. An initial population (a_3i, a_4i), i = 1, 2, ⋯, N, is randomly created in the first step of the method. We have also set the measure r as the fitness function, i.e., the average number of accurate digits over the 32 runs mentioned above. The fitness function is then evaluated for each individual in the initial population. In each generation (iteration) g, a three-phase sequential scheme updates all of the individuals involved. These phases are Differentiation, Crossover and Selection. We used the MATLAB [21] software DeMat [22] for implementing the latter technique; a minimal sketch of the DE loop is given below. Indeed, we manage to produce an improvement by the selection (4) of the pair a_3, a_4. The coefficients of the new method, in matrix form suitable for double precision computations, are given below. For this method, we obtained r ≈ 7.75, which is a very impressive result. Actually, we obtained many methods with r > 7.7, since there seems to exist a small area of pairs a_3, a_4 where r attains high values. We also remark that for selection (4) neither σ ≡ 1 nor ρ = O(τ^8) holds, i.e., ρ_8 ≠ 0, and no special property is satisfied. We ran the methods constructed for addressing periodic problems on the eight problems listed in the third section. We summarize the results in Table 2. It is clear from this table that NEW6 performs better than all methods referred to until now, namely F6, M6, T6, PL8 and σ1. Other authors have also recently tried to train coefficients of RK methods [23].
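Below is the promised minimal sketch of the DE loop (a generic illustration, not DeMat itself); the fitness function is a smooth stand-in for r, which in the actual training is the mean number of accurate digits over the 32 runs.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(a3, a4):
    # Stand-in for r; in the actual training this builds the method from
    # (a3, a4), runs all test problems and averages the accurate digits.
    return -(a3 - 0.1) ** 2 - (a4 + 0.5) ** 2

def differential_evolution(n=20, generations=100, F=0.5, CR=0.9):
    pop = rng.uniform(-1.0, 1.0, size=(n, 2))           # individuals (a3_i, a4_i)
    fit = np.array([fitness(*x) for x in pop])
    for _ in range(generations):
        for i in range(n):
            others = [k for k in range(n) if k != i]
            r1, r2, r3 = rng.choice(others, 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differentiation
            mask = rng.random(2) < CR
            mask[rng.integers(2)] = True                # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])      # crossover
            ft = fitness(*trial)
            if ft > fit[i]:                             # selection (maximize r)
                pop[i], fit[i] = trial, ft
    return pop[np.argmax(fit)], fit.max()

best, r_best = differential_evolution()
print(best, r_best)   # converges near the stand-in optimum (0.1, -0.5)
```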
However, in that paper [23], only second- and third-order methods are considered [24,25], with constant step sizes and over single problems (e.g., Van der Pol). The learning algorithm given there remains to be tested on current and stiffer cases. Our proposal for differential evolution comes after several papers through the years [19]. Numerical Results Method NEW6 was produced to perform best on problems 1-8 listed in Section 3. In the tests recorded in Tables 1 and 2, it was meant to outperform other methods for the intervals and steps used there. Thus, we intend to test NEW6 on a different set of problems, intervals and numbers of steps. In this direction, we ran problems 1-8 again over the longer interval [0, 20π]. We now name these problems 1a, 2a, ⋯, 8a. In addition, we included two more nonlinear problems. 10. Two coupled oscillators with different frequencies. The problem is characterized by the equations given in [27]. We also integrated this problem in [0, 20π], but no analytical solution is available. For an estimation of the error at the grid points, we used a Runge-Kutta-Nyström method [28] with a very stringent tolerance. Finally, we consider the linearized wave equation, a rather large-scale problem [14] whose theoretical solution is known. We semi-discretize ∂^2 u/∂x^2 with fourth-order symmetric differences at the internal points and one-sided differences of the same order at the boundaries (including the knowledge of ∂u/∂x there) and conclude with a system of ODEs. By choosing Δx = 5, we arrive at a constant-coefficient system with N = 20. The results for this problem were dominated by the semi-discretization errors. We ran these 11 problems for various numbers of steps and tabulated the results in Table 3. There, we included results for other state-of-the-art sixth-order Numerov-type methods (i.e., methods including off-step points). It is obvious from there that NEW6 outperformed all other methods from the literature by a considerable distance. Conclusions The main points of our research were the following. • We considered a family of sixth-order hybrid two-step schemes that shares the lowest number of stages, and the main novelty is suggesting a method for selecting proper free parameters. • The parameters of the new method were chosen after testing their performance on a large set of periodic problems. • The best choice was found using the differential evolution method. In a wide range of problems with oscillating solutions, the developed scheme significantly outperformed other methods from the same or other families. • The presented method is tuned for problems with periodic solutions, especially when these problems share a large linear part.
3,225.6
2021-10-29T00:00:00.000
[ "Mathematics" ]
MiR-449b-5p targets lncRNA PSMG3-AS1 to suppress cancer cell proliferation in lung adenocarcinoma Background PSMG3-AS1 has been characterized as an oncogenic lncRNA in breast cancer, while its role in other cancers is unknown. This study investigated the role of PSMG3-AS1 in lung adenocarcinoma (LUAD). Methods This study included 64 LUAD patients (42 males and 22 females) who were enrolled between May 2012 and May 2014. RT-qPCR was used to evaluate the expression levels of lncRNA. Cell proliferation analysis was performed using a CCK-8 kit. Results We found that upregulation of PSMG3-AS1 in LUAD predicted poor survival of patients. MiR-449b-5p is downregulated in LUAD, and its expression levels were inversely correlated with those of PSMG3-AS1. MiR-449b-5p was predicted to target PSMG3-AS1, and overexpression of miR-449b-5p resulted in the downregulation of PSMG3-AS1 in LUAD cells. Cell proliferation analysis showed that overexpression of PSMG3-AS1 resulted in an increased rate of cell proliferation. Overexpression of miR-449b-5p reduced the enhancing effects of PSMG3-AS1 on cell proliferation. Conclusions Therefore, miR-449b-5p may target PSMG3-AS1 in LUAD to suppress cancer cell proliferation. Background Lung cancer has been the leading cause of cancer-related mortality worldwide for decades [1]. The latest GLOBOCAN statistics reported that lung cancer caused 1,761,007 deaths, accounting for 18.4% of all cancer deaths in 2018 [2]. In the same year, there were 2,093,876 new cases of lung cancer, accounting for 11.6% of all new cancer cases [2]. It is estimated that more than 50% of lung cancer patients diagnosed with a localized tumor can live longer than 5 years, but only 16% of lung cancer patients are diagnosed at early stages [3]. Once distant metastasis occurs, only 6% of lung cancer patients can survive for 5 years [4]. Tobacco smoking is responsible for the majority of lung cancer cases, while this disease also affects never-smokers, indicating a complicated pathogenesis [5,6]. Studies on the molecular mechanisms of lung cancer have identified critical molecular pathways involved in the development and progression of this disease [7,8]. Characterization of lncRNAs involved in lung cancer provides novel insights into the development of targeted therapies [9,10]. Noncoding RNAs (ncRNAs), such as long (> 200 nt) ncRNAs (lncRNAs) [11] and miRNAs [12], are critical players in cancer biology and either promote or suppress cancer development by regulating the expression of cancer-related genes. Therefore, regulating the expression of cancer-related ncRNAs may benefit cancer treatment. However, the functions of most ncRNAs in cancer remain unclear. PSMG3-AS1 was recently characterized as an oncogenic lncRNA in breast cancer [13], while its role in lung cancer remains unclear. Our bioinformatics analysis showed that PSMG3-AS1 may be targeted by miR-449b-5p, which is a tumor-suppressive miRNA [14]. This study was therefore carried out to investigate the interaction between miR-449b-5p and PSMG3-AS1 in lung adenocarcinoma, a major subtype of lung cancer. LUAD patients and tissue samples This study was approved by the Ethics Committee of the 3rd Affiliated Teaching Hospital of Xinjiang Medical University. This study included 64 LUAD patients (42 males and 22 females) who were enrolled at the aforementioned hospital between May 2012 and May 2014.
All LUAD patients were newly diagnosed cases; cases with other severe clinical disorders, such as other malignancies and chronic diseases, were excluded. No therapy was initiated before this study. The age of patients ranged from 42 to 66 years, with a mean age of 54.2 ± 6.7 years. Among the 64 patients, 51 were smokers or had a history of smoking. All patients signed written informed consent. Paired LUAD and non-tumor tissues were collected from each patient through fine needle aspiration (FNA). All tissue specimens were confirmed by histopathological exams. All tissue samples were stored in liquid nitrogen before use. A 5-year follow-up The 64 patients were staged according to AJCC criteria (8th edition). There were 8, 11, 21, and 24 cases of stage I, stage II, stage III and stage IV, respectively. Based on clinical stage, patients were treated with chemotherapy, surgical resection, radiotherapy, targeted therapy, or a combination of these treatments. All patients were followed up for 5 years after admission to record their survival. All patients completed the follow-up. RNA preparations RNAzol (Sigma-Aldrich) was used to isolate total RNA from tissue samples and in vitro cultivated cells. To harvest miRNAs, RNA precipitation and washing were performed using 85% ethanol. A gDNA eraser (Takara Bio) was used to remove genomic DNA from all RNA samples. A NanoDrop 2000 spectrophotometer (Thermo Scientific) was used to measure RNA concentrations. RNA integrity was checked on a urea-PAGE gel. Statistical analysis Mean ± SEM values were used to express data from 3 independent replicates. All statistical analyses were performed using GraphPad Prism 6 software (GraphPad, USA). Comparisons between non-tumor and LUAD tissues were performed by paired t test. Comparisons among multiple groups were performed by one-way ANOVA and Tukey test. Linear regression was used for correlation analysis. The 64 patients were divided into high (n = 32) and low (n = 32) PSMG3-AS1 level groups, with the mean value of PSMG3-AS1 expression level in LUAD tissues used as the cutoff value. Fig. 2 (caption): MiR-449b-5p is downregulated in LUAD and inversely correlated with PSMG3-AS1. Expression levels of miR-449b-5p in both LUAD and non-tumor tissues from the 64 patients with LUAD were also measured by performing RT-qPCR. PCR reactions were repeated 3 times and mean values were presented and compared (a). ***, p < 0.001. Correlations between expression levels of miR-449b-5p and PSMG3-AS1 across both LUAD (b) and non-tumor (c) tissues were analyzed by linear regression. Survival curves were plotted based on the 5-year follow-up data and compared by log-rank test. Chi-squared test was performed to analyze the relationship between the expression levels of PSMG3-AS1 and patients' clinical data. p < 0.05 was considered statistically significant. Upregulation of PSMG3-AS1 in LUAD predicted poor survival The expression levels of PSMG3-AS1 in both LUAD and non-tumor tissues from the 64 patients with LUAD were measured by RT-qPCR. Compared to non-tumor tissues, the expression levels of PSMG3-AS1 were significantly higher in LUAD tissues (Fig. 1a, p < 0.05). Survival curves for both high and low PSMG3-AS1 level groups were plotted. No significant differences in therapeutic treatments were found between the high and low PSMG3-AS1 level groups. Compared to the low PSMG3-AS1 level group, patients in the high PSMG3-AS1 level group showed a higher mortality rate within the 5-year follow-up.
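The survival comparison described above (dichotomizing patients at the mean PSMG3-AS1 expression level and applying a log-rank test) can be reproduced in outline as follows. This is a hedged sketch, not the authors' code: the `lifelines` package is assumed to be available, and the arrays are randomly generated stand-ins for the 64 patients' data.

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical arrays standing in for the study data: relative PSMG3-AS1
# expression in LUAD tissue, follow-up time in months (max 60), and an
# event flag (1 = died during the 5-year follow-up, 0 = censored).
rng = np.random.default_rng(1)
expression = rng.lognormal(mean=0.0, sigma=0.5, size=64)
time_months = rng.integers(6, 61, size=64).astype(float)
died = rng.integers(0, 2, size=64)

# Dichotomize at the mean expression level, as described in the paper.
high = expression > expression.mean()

# Compare the two survival curves with a log-rank test.
result = logrank_test(
    time_months[high], time_months[~high],
    event_observed_A=died[high], event_observed_B=died[~high],
)
print(f"high group n = {high.sum()}, low group n = {(~high).sum()}")
print(f"log-rank p-value: {result.p_value:.4f}")
```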
Chi-squared test analysis showed that the expression levels of PSMG3-AS1 were not significantly correlated with patients' age, gender, clinical stage or smoking habit (Table 1). MiR-449b-5p is downregulated in LUAD and inversely correlated with PSMG3-AS1 The expression levels of miR-449b-5p in both LUAD and non-tumor tissues from the 64 patients with LUAD were also measured by RT-qPCR. Compared to non-tumor tissues, the expression levels of miR-449b-5p were significantly lower in LUAD tissues (Fig. 2a, p < 0.001). Correlation analysis showed that the expression levels of miR-449b-5p and PSMG3-AS1 were inversely and significantly correlated across both LUAD (Fig. 2b) and non-tumor (Fig. 2c) tissues. MiR-449b-5p targeted PSMG3-AS1 to suppress cancer cell proliferation The effects of overexpression of miR-449b-5p and PSMG3-AS1 on the proliferation of H522 and H23 cells were analyzed by performing a CCK-8 assay. Compared to the C group, overexpression of PSMG3-AS1 resulted in an increased rate of cell proliferation. Overexpression of miR-449b-5p played an opposite role and reduced the enhancing effects of PSMG3-AS1 on cell proliferation (Fig. 4, p < 0.05). Fig. 3 (caption): MiR-449b-5p targeted PSMG3-AS1 to downregulate its expression. The interaction between miR-449b-5p and PSMG3-AS1 was predicted using IntaRNA 2.0 (a). H522 and H23 cells were transfected with miR-449b-5p mimic, PSMG3-AS1 expression vector, or PSMG3-AS1 siRNA, followed by the confirmation of miR-449b-5p and PSMG3-AS1 overexpression as well as PSMG3-AS1 siRNA silencing by RT-qPCR at 48 h post-transfection (b). The effects of overexpression of miR-449b-5p on PSMG3-AS1 (c) and the effects of overexpression and silencing of PSMG3-AS1 on miR-449b-5p (d) were also analyzed by RT-qPCR at 48 h post-transfection. All experiments were repeated 3 times and mean values were presented and compared. *, p < 0.05. Fig. 4 (caption): MiR-449b-5p targeted PSMG3-AS1 to suppress cancer cell proliferation. The effects of overexpression of miR-449b-5p and PSMG3-AS1 on the proliferation of H522 and H23 cells were analyzed by performing a CCK-8 assay. All experiments were repeated 3 times and mean values were presented and compared. *, p < 0.05. In addition, silencing of PSMG3-AS1 was also performed to further confirm the function of PSMG3-AS1 in regulating cell proliferation. Compared to the C group, silencing of PSMG3-AS1 resulted in decreased proliferation of H522 and H23 cells (Fig. 5, p < 0.05). Discussion This study mainly investigated the interaction between miR-449b-5p and PSMG3-AS1 in LUAD. We found that the expression of miR-449b-5p and PSMG3-AS1 was altered in LUAD. In addition, miR-449b-5p might be able to target PSMG3-AS1 to suppress cancer cell proliferation. The functionality of PSMG3-AS1 has only been investigated in breast cancer [13]. It is observed that PSMG3-AS1 is overexpressed in breast cancer and may promote the migration and proliferation of cancer cells by sponging miR-143-3p [13]. To the best of our knowledge, this study is the first to report the upregulation of PSMG3-AS1 in LUAD, a major subtype of lung cancer. Our in vitro cell experiments also showed that PSMG3-AS1 could promote the proliferation of LUAD cells. Therefore, our data suggest that PSMG3-AS1 plays an oncogenic role in LUAD. Based on our knowledge, the prognostic value of PSMG3-AS1 in cancers remains unknown.
Our 5-year follow-up study showed that high expression levels of PSMG3-AS1 measured before therapy were closely correlated with the poor survival of LUAD patients. Most lung cancer patients are diagnosed at advanced stages, and their survival is generally poor [16]. Due to the lack of early diagnostic markers, the low early diagnostic rate of lung cancer is unlikely to be increased in the near future. Therefore, as an alternative approach, accurate prognosis might help to determine therapeutic approaches and to develop personalized care programs, thereby improving overall survival. MiR-449b-5p has been characterized as a tumor-suppressive miRNA in multiple cancers, such as breast cancer and osteosarcoma [14,17]. In these cancers, miR-449b-5p targets multiple protein-coding genes, such as CREPT and c-Met, to suppress cancer development and progression [14,17]. This study is the first to show the downregulation of miR-449b-5p in LUAD and its inhibitory effects on LUAD cell proliferation. Therefore, miR-449b-5p is also a tumor-suppressive miRNA in LUAD. Interestingly, we showed that miR-449b-5p could target PSMG3-AS1. Therefore, besides protein-coding genes, miR-449b-5p can also target lncRNAs to participate in cancer biology. Abbreviations LUAD: Lung adenocarcinoma; lncRNAs: Long non-coding RNAs; ncRNAs: Non-coding RNAs Ethics approval and consent to participate This study was approved by the Ethics Committee of the 3rd Affiliated Teaching Hospital of Xinjiang Medical University. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee. Written informed consent was obtained from all individual participants included in the study. Consent for publication Not applicable.
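For illustration, relative lncRNA/miRNA levels of the kind reported above are commonly derived from RT-qPCR Ct values via the 2^-ΔΔCt method. The excerpt does not state which quantification scheme the authors used, so the following Python sketch, with made-up Ct values, is an assumption-labeled illustration rather than the paper's pipeline.

```python
def ddct(ct_target, ct_reference, ct_target_calib, ct_reference_calib):
    """Relative expression by the common 2^-ΔΔCt method (an assumption here;
    the excerpt does not state which quantification scheme was used)."""
    dct_sample = ct_target - ct_reference              # ΔCt in the sample
    dct_calib = ct_target_calib - ct_reference_calib   # ΔCt in the calibrator
    return 2.0 ** -(dct_sample - dct_calib)

# Hypothetical Ct values: PSMG3-AS1 in a LUAD sample vs. the paired
# non-tumor tissue, each normalized to a housekeeping reference gene.
fold = ddct(ct_target=24.1, ct_reference=18.0,
            ct_target_calib=26.5, ct_reference_calib=18.2)
print(f"PSMG3-AS1 fold change (tumor vs. non-tumor): {fold:.2f}")  # ~4.6x up
```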
2,556.4
2020-05-29T00:00:00.000
[ "Medicine", "Biology" ]
Classification of irregular free boundary points for non-divergence type equations with discontinuous coefficients We provide an integral estimate for a non-divergence (non-variational) form second order elliptic equation $a_{ij}u_{ij}=u^p$, $u\ge 0$, $p\in[0, 1)$, with bounded discontinuous coefficients $a_{ij}$ having small BMO norm. We consider the simplest discontinuity of the form $x\otimes x|x|^{-2}$ at the origin. As an application we show that the free boundary corresponding to the obstacle problem (i.e. when $p=0$) cannot be smooth at the points of discontinuity of $a_{ij}(x)$. To implement our construction, an integral estimate and a scale invariance will provide the homogeneity of the blow-up sequences, which then can be classified using ODE arguments. Introduction In this paper we consider the free boundary problem (1.1) with $p \in (0, 1)$. We will also deal with the case $p = 0$, using the notation that identifies $v$ to the power zero with the characteristic function $\chi_{\{v>0\}}$. Problems of this type often arise in real-world phenomena. For instance, in the study of the spread of biological populations one studies the problem (1.2) $\operatorname{div}(a\nabla(u^m)) + f(x)u + b\cdot\nabla(u^m) = 0$, where $u : \mathbb{R}^n \to [0, +\infty)$ represents the density of the population, $a : \mathbb{R}^n \to \operatorname{Mat}(n\times n)$, and $b : \mathbb{R}^n \to \mathbb{R}^n$ represents a drift term. Here, $m > 1$, $a(x)$ is a positive definite matrix (with entries $a_{ij}(x)$) and $f : \mathbb{R}^n \to \mathbb{R}$ takes into account the influence of the environment on the population, see [S83]. It is convenient to reformulate the problem in terms of the auxiliary function $v := u^m$ and write (1.2) as $\operatorname{div}(a\nabla v) + f(x)v^{1/m} + b\cdot\nabla v = 0$. Notice that this boils down to the equation in (1.1) when $m = 1/p$, $f \equiv -1$ and $b = (b_1, \dots, b_n)$ with $b_i = \partial_j a_{ij}$. The case in which $a_{ij}$ is the identity matrix reduces of course to that of the Laplacian, and, in general, a non-constant $a_{ij}$ models a heterogeneous medium in which the speed of diffusion differs from one point to another. Moreover, equations in non-divergence form arise naturally from probabilistic considerations, for instance, as the infinitesimal generators of anisotropic random walks, see e.g. Section 2.1.3 in [C08]. Furthermore, when $a$ in (1.1) is the identity matrix, the problem is related to the singular one in [AP86], and as $p \to 0$ it recovers the exemplary free boundary problem in [C77]. One of the main distinctions in the field of partial differential equations consists in the difference between equations "in divergence form" and those "in non-divergence form". While the first ones naturally admit a variational formulation and can be dealt with by energy methods, the second ones usually require different, and perhaps more sophisticated, techniques (see e.g. [T82] for a detailed discussion), often in combination with viscosity methods. We refer to [K07, C08] and the references therein for thorough presentations of similarities and differences between equations in divergence and non-divergence form. A similar distinction between divergence and non-divergence structure occurs in the field of free boundary problems. As a matter of fact, free boundary problems whose partial differential equation is in divergence form often enjoy a special feature given by the so-called "monotonicity formulas": namely, the energy functional, or a suitable variational integral, possesses a natural monotonicity property with respect to some geometric quantity (typically, a functional defined on balls of radius $r$ turns out to be monotone in $r$).
This type of monotonicity property is, in a sense, geometrically motivated, since it may be seen somehow as an offspring of classical monotonicity formulas arising in the theory of minimal surfaces and geometric flows. In addition, combined with the natural scaling of the problem, a monotonicity formula is often very useful in proving uniqueness of blow-up solutions, classification results and regularity theorems. Vice versa, problems which do not enjoy monotonicity formulas (or for which a monotonicity formula is not known) may turn out to be considerably harder to deal with, and proving (or disproving) a strong regularity theory is a natural, important and often very challenging question (see e.g. [CS05, PSU12] for further discussions on monotonicity formulas). The study of free boundaries in discontinuous media is also a very active field of research in itself; see in particular [T16] for related fully nonlinear dead-core problems, [ALT16] for dead-core problems driven by the infinity Laplacian, and [PT16] for cavity problems in rough media. See also [BT14] for a case in which the coefficients belong to the space of vanishing mean oscillation. Our objective in the present paper is to study the behavior of the solution $v$ of (1.1) near the free boundary points $x \in \partial\{v > 0\}$ at which the matrix $a_{ij}(x)$ is discontinuous. A model example of this sort in 2D is given in (1.3), where $\varepsilon$ is a small constant and $p \in [0, 1)$ (here, we are using the standard notation). We observe that the quadratic form $a_{ij}\xi_i\xi_j = |\xi|^2 + \frac{\varepsilon}{|x|^2}\left[(x_1\xi_2)^2 + (x_2\xi_1)^2 - 2x_1x_2\xi_1\xi_2\right] = |\xi|^2 + \frac{\varepsilon}{|x|^2}(x_1\xi_2 - x_2\xi_1)^2$ is positive definite, and the $a_{ij}$ are discontinuous at the origin. More generally, we can assume that the diffusion matrix $a$ has the form given in (1.5), where $h_{ij}$ is a homogeneous function of degree zero, and a closeness condition holds at any point $x_0 \in \mathbb{R}^n$ for some $\delta > 0$. Roughly speaking, in (1.5), the terms $b_{ij}$ and $h_{ij}$ represent the continuous and the discontinuous parts of $a_{ij}$, respectively. Throughout this paper we will assume that the operator satisfies the following conditions: (H1) the entries of the matrix $a_{lm}$ are bounded measurable functions, and the matrix is uniformly elliptic, i.e. there exist two positive constants $\lambda$ and $\Lambda$ such that $\lambda|\xi|^2 \le a_{lm}\xi_l\xi_m \le \Lambda|\xi|^2$; (H2) the mean oscillation of $a_{lm}$ at scale $R$ is controlled by $\delta(R)$, where $\delta(R) > 0$ is a small constant; (H3) the matrix $a_{ij}$ has at least one discontinuity at $x_0 \in \mathbb{R}^n$, such that $a_{ij}(x)$ is rotationally invariant at $x_0$ and homogeneous of degree zero. In this setting, the problem in (1.1) admits a solution, as given by the following result: Theorem 1.1. Given a suitable boundary datum $g$, there exists a nonnegative function $v$ such that $v - g \in W^{2,q}(B_1) \cap W^{1,q}_0(B_1)$, for some $1 < q < +\infty$, and $v$ solves (1.1). From the technical point of view, concerning the assumptions on the coefficients $a_{ij}$, we notice that the function $x_ix_j|x|^{-2} \in VMO$ for any $i$ and $j$. However, if $\varepsilon$ is sufficiently small, then (H2) holds with $\delta(R) \le C\varepsilon$, where $C$ is a dimensional constant. Consequently, we can apply the $W^{2,q}$ estimates from Theorem 4.4 in [CFL93] to establish the existence and optimal growth of the solutions. As a matter of fact, setting $\beta := \frac{2}{1-p}$ as in (1.6), we can bound the growth from the free boundary according to the following result (see also Theorem 2 in [T16]): Theorem 1.2. Let $v \ge 0$ be a bounded weak solution of (1.1) in $B_1$. Then there exists a constant $M > 0$, depending on $\|v\|_{L^\infty(B_1)}$, such that the natural growth bound holds for each $\bar{x} \in B_{1/2} \cap \partial\{v > 0\}$ and any admissible radius. We remark that the problem in (1.1) has a natural scale invariance: to exploit it, it is useful to define $v_r(x) := \frac{v(x_0 + rx)}{r^\beta}$, with $\beta$ as in (1.6); a short verification is sketched below.
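As a quick check of the scale invariance just mentioned, one can verify directly (written out here under the assumption $x_0 = 0$, using the degree-zero homogeneity of $a_{ij}$) that $v_r$ solves the same equation:

```latex
\[
  \partial_{ij} v_r(x) = r^{2-\beta}\,(\partial_{ij} v)(rx),
  \qquad\text{hence}\qquad
  a_{ij}(x)\,\partial_{ij} v_r(x)
  = a_{ij}(rx)\, r^{2-\beta}\,(\partial_{ij} v)(rx)
  = r^{2-\beta}\, v(rx)^p
  = r^{2-\beta+p\beta}\, v_r(x)^p
  = v_r(x)^p,
\]
% since a_{ij}(rx) = a_{ij}(x) by homogeneity of degree zero, and
% beta = 2/(1-p) gives 2 - beta + p*beta = 0.
```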
We notice indeed that $v_r$ is also a solution of (1.1). We will show that, up to a subsequence, these blow-up functions approach a blow-up limit. We say that $v$ is non-degenerate at $x_0 \in \partial\{v > 0\}$ if there exists a sequence of positive numbers $r_k \to 0$ such that the corresponding blow-up limit is not identically zero. A cornerstone of our analysis is a uniform integral estimate. The result that we obtain is the following: Theorem 1.3. Let $v$ be a strong solution of (1.1) in $B_1$, with $a_{ij}$ as in (1.4). Assume that $0 \in \partial\{v > 0\}$ and $v$ is non-degenerate at $0$. Then the integral estimate (1.7) holds. In this framework, the integral estimate in (1.7), combined with the scale invariance, implies that the blow-up limits are homogeneous, as described in the following result: Theorem 1.4. Let $v$ be a strong solution of (1.1) in $B_1$, with $a_{ij}$ as in (1.4). Assume that $0 \in \partial\{v > 0\}$ and $v$ is non-degenerate at $0$. Then any blow-up sequence at $0$ has a converging subsequence such that the limit is a homogeneous function of degree $\beta = \frac{2}{1-p}$. This result will in turn play a special role for the classification of global solutions. Roughly speaking, the homogeneity property, an appropriate use of polar coordinates and explicit methods borrowed from the theory of ordinary differential equations lead to a classification of solutions growing in a non-degenerate way from a smooth free boundary. This classification and the analysis of the blow-up limits will be the main ingredients for the analysis of irregular free boundary points, as explained in the following result (compare also with Corollary 6.8 in [BT14]): Theorem 1.5. Let $n = 2$, $L$ be as in (1.1) and $a_{ij}$ as in (1.4), with $|\varepsilon|$ sufficiently small. Let $v$ be a solution of (1.1) in $B_1$ with $p = 0$. Assume that $0 \in \partial\{v > 0\}$ and that $v$ is non-degenerate at $0$. Then $\partial\{v > 0\}$ cannot be differentiable at the origin. The paper is organized as follows: in Section 2 we establish the existence of a strong solution of (1.1) in the unit ball $B_1$ and thus prove Theorem 1.1. Next, using a dyadic scaling argument, we prove that a solution $v(x)$ grows away from the free boundary $\partial\{v > 0\}$ as $[\operatorname{dist}(x, \partial\{v > 0\})]^\beta$. This is contained in Section 3, which provides the proof of Theorem 1.2. Our main technical tool, which is the uniform integral bound in Theorem 1.3, is established in Section 4. To this goal, we use some computations based on the ideas of Joel Spruck [S83]. Section 4 also contains the proof of Theorem 1.4, which fully relies on the integral estimate in (1.7). Finally, in Section 5 we show that the free boundary cannot be regular at the free boundary points where $a_{ij}$ suffers a discontinuity satisfying (H3), thus completing the proof of our main result in Theorem 1.5. Existence of solutions In this section, we give the proof of the existence result in Theorem 1.1. Proof of Theorem 1.1. The proof is based on a classical penalization argument. The case of the obstacle problem, corresponding to $p = 0$, is treated in [BT14]. Our proof is similar, but we sketch it for the reader's convenience since, unlike [BT14], our coefficients are not in VMO. In fact, for our case $p \in (0, 1)$ the proof is shorter, since for $p > 0$ the penalization function $\varphi_\varepsilon$ (see below) is continuous at the origin. Hence, by a customary compactness argument, we deduce that the limit of the penalized problems is a solution of (1.1) a.e. Therefore, we only need to establish uniform estimates for the penalized problem (2.5). The details of the proof go as follows. Let $\eta_\varepsilon$ be a standard mollifier. Set $a_{ij}^\varepsilon := a_{ij} * \eta_\varepsilon$ and $g_\varepsilon := g * \eta_\varepsilon$, where $g$ is as in the statement of Theorem 1.1.
Furthermore, let $\varphi_\varepsilon : \mathbb{R} \to \mathbb{R}$ be a family of penalization functions with the usual properties. Then there exists a classical solution $v$ of the corresponding Dirichlet problem. Now, for every $t \in [0, 1]$, we consider the penalized problem (2.1). Here, the subscript $t$ is just a parameter and does not denote a time derivative. We set $T := \{t \in [0,1] : \text{(2.1) has a solution}\}$ and we claim that $T$ is all of $[0, 1]$. For any $t \in [0, 1]$, we consider the operator $A_tu := a_{ij}u_{ij} - t\varphi_\varepsilon(u)$. Then the Fréchet derivative of $A_t$ can be computed explicitly; since, by construction, $\varphi_\varepsilon$ is monotone increasing, the derivative operator is a linear elliptic operator with a sign-controlled zeroth-order term. Applying the Schauder theory in Chapter 6 of [GT98], we conclude that for any $f \in C^\alpha$ and $g \in C^{2,\alpha}(B_1)$ there exists a solution $w$ of the associated linear problem. This implies that $DA_t$ is invertible. To show that $T$ is closed, we first observe that, from the Sobolev embedding, we have that $\|v_t\|_{C^{1,\alpha}} \lesssim \|v_t\|_{W^{2,q}}$. Consequently, applying the Schauder estimates in Chapter 6 of [GT98], we obtain that $\|v_t\|_{C^{4,\alpha}} \le C(\varepsilon)$, for some $C(\varepsilon) > 0$, independently of $t$. Thus, if $T \ni t_k \to t_0$, then from the Arzelà-Ascoli theorem it follows that $v_{t_k} \to v_{t_0}$ in $C^{4,\alpha}(B_1)$ and $v_{t_0}$ solves the corresponding problem (2.1), thus proving (2.4). Now, from (2.2) and (2.4), we deduce that a solution of (2.1) exists for all $t \in [0, 1]$. By Theorem 4.2 in [CFL93], we have uniform estimates in $\varepsilon$, because $a_{ij}$ verifies (H1)-(H3). Optimal growth from the free boundary We remark that if the inequality (3.1) holds in some neighborhood of $x_0$, for some constant $C > 0$ and $\beta$ as in (1.6), then $v_r$ is uniformly bounded as $r \to 0$. So, we show that the growth control in (3.1) is indeed satisfied for bounded solutions of (1.1). The result that we have is the following: Proposition 3.1. Let $v \ge 0$ be a weak solution of (1.1) in $B_1$ bounded by some constant $M > 0$. Then there exists a constant $C > 0$ such that the corresponding dyadic growth bound holds for each $x \in B_{1/2} \cap \partial\{v > 0\}$. Remark 3.2. It is well known that the estimate in Proposition 3.1 implies the desired growth rate in (3.1). Proof of Proposition 3.1. We use a dyadic scaling argument. Suppose that the claim in Proposition 3.1 fails; then there exist a sequence of integers $k_i$ and points $x_i \in B_{1/2} \cap \partial\{v > 0\}$ at which the dyadic bound fails. We introduce the corresponding scaled functions $u_i$, where $S(\cdot)$ is a short notation for $S(\cdot, x_i)$. Then we have the associated normalization and, from (3.3), the bound (3.5). Furthermore, setting $r_i := 2^{-k_i}$, a direct computation shows how the equation transforms under this scaling. Notice also that (3.3) and (1.6) yield a uniform control of the right-hand side, and consequently, recalling (3.6), we obtain the equation satisfied by $u_i$. Let us define the sequence of matrices $A^i_{lm}(x) := a_{lm}(x_i + r_ix)$. Then $A^i(x)$ satisfies (H1). Observe that the change of variables $\xi = x_i + r_ix$, recalling that $x \in B_{1/2}$, implies that (H2) is also satisfied for the matrices $A^i$. Furthermore, in light of (3.7), we see that $u_i$ solves the corresponding differential inequality. From (3.6), (3.8) and (3.9) it follows that we can apply Theorem 4.1 in [CFL93] to conclude that, for any $q > 1$, a $W^{2,q}$ estimate holds uniformly in $i$ on any fixed ball $B_\rho$ with arbitrary radius $\rho > 0$. Consequently, the sequence of strong solutions $\{u_i\}$ is bounded in $W^{2,q}_{loc} \cap L^\infty$. From the Krylov-Safonov theorem it follows that, for a subsequence, still denoted by $u_i$, we have that $u_i \to u$ uniformly in $B_{3/4}$. Thus $u_i(0) = 0$, and (3.5) translates to the limit function $u$; namely, we have $u(0) = 0$, $u(x) \ge 0$ and $\sup_{B_{1/2}} u = 1$. On the other hand, $A^i \to A^0$ a.e., and $A^0$ satisfies (H1)-(H3). In particular, $A^0_{lm}u_{lm} = 0$ a.e. Hence, $u(0) = 0$ and the strong maximum principle imply that $u \equiv 0$, which is in contradiction with $\sup_{B_{1/2}} u = 1$, and the proof is complete.
From Proposition 3.1 and Remark 3.2 we obtain Theorem 1.2, as desired. Lemma 4.1 expresses the operator $L$ in polar coordinates in the plane; using this and the standard representation of the Laplacian in polar coordinates, the desired result follows. With this, we are in a position to prove Theorem 1.3. Proof of Theorem 1.3. We let $r := e^{-t}$ and $w(t, \theta) := \frac{v(r,\theta)}{r^\beta}$. Then we compute the equation satisfied by $w$. Plugging this into (4.3), and recalling that $\beta - 2 = p\beta$, we arrive at equation (4.5). Next, we multiply both sides of equation (4.5) by $\partial_tw$ and integrate, first over the unit circle $S^1$ and then over the interval $[T_1, T_2]$, to get (4.6). Integrating by parts yields (4.7), and similarly (4.8). So, plugging these, (4.7) and (4.8), into (4.6), we obtain a uniform bound. Since $\partial_tw = -\frac{\partial_rv}{r^{\beta-1}} + \beta\frac{v}{r^\beta}$, the last inequality then reads as a bound by a constant $C$ that depends only on the constant $M$ in the growth estimate $v(x) \le M|x|^\beta$; see Theorem 1.2. Since $T_1$ and $T_2$ are arbitrary, by the change of variable $r := e^{-t}$ we obtain the estimate in its polar form. This implies the desired result via polar coordinates. From Theorem 1.3, we obtain the homogeneity of the blow-up sequences, according to Theorem 1.4: Proof of Theorem 1.4. By (1.7), a change of variable $x = \rho y$ gives a scaled version of the integral estimate, where the notation in (4.1) has been used. This and (4.2) imply the Euler homogeneity relation $y \cdot \nabla v_0(y) = \beta\,v_0(y)$ for any $y \in \mathbb{R}^n$, which implies the desired result (see e.g. Lemma 4.2 in [DSV15]). 4.2. n-dimensional problems. For the sake of completeness, we now consider a multidimensional model, taking $a_{ij}$ as in (4.9). Notice that the hypotheses (H1)-(H3) are satisfied for sufficiently small $|\varepsilon|$. We extend Theorem 1.4 to this case. To this aim, let us switch to polar coordinates and define $x_1 = r\cos\theta_1$, ..., $x_n = r\sin\theta_1\sin\theta_2\cdots\sin\theta_{n-1}$, where $0 \le \theta_k \le \pi$ for $k = 1, \dots, n-2$, and $-\pi \le \theta_{n-1} \le \pi$. In this setting, the analogue of Lemma 4.1 goes as follows: Lemma 4.2. Let $L$ be as in (1.1), with $a_{ij}$ as in (4.9). Assume that $x$ lies on the $x_1$ axis. Then the representation (4.10) holds. Hence, proceeding as in (4.5), and using $\theta = 0$ to place the point on the $x_1$ axis, we get the corresponding identity, which gives the desired result. In this setting, the analogue of Theorem 1.4 is the following: Let $v$ be a strong solution of (1.1) in $B_1 \subset \mathbb{R}^n$, with $a_{ij}$ as in (4.9). Assume that $0 \in \partial\{v > 0\}$ and $v$ is non-degenerate at $0$. Then any blow-up sequence at $0$ has a converging subsequence such that the limit is a homogeneous function of degree $\beta = \frac{2}{1-p}$. Proof. We use the change of variables $r = e^{-t}$, $(\theta_1, \dots, \theta_{n-1}) \in S^{n-1}$, where $S^{n-1}$ is the unit sphere in $\mathbb{R}^n$. Hence, for the function $w(t, \theta) = \frac{v(r,\theta)}{r^\beta}$, making use of (4.10), equation (1.1) can be rewritten in a form involving $\Delta_{\theta\theta}$, the Laplace-Beltrami operator on the unit sphere. Thus, repeating the integration by parts as in the proof of Theorem 1.3 and the scaling argument in the proof of Theorem 1.4, the desired result follows. Global homogeneous solutions In this section, we classify the global solutions of (1.1) in the plane in the homogeneous setting for the case of the obstacle problem. Theorem 5.1. Let $n = 2$, $L$ be as in (1.1) and $a_{ij}$ as in (1.4). Let $v$ be a solution of (1.1) in $\mathbb{R}^2$ with $p = 0$ which is homogeneous of degree 2. Assume that $0 \in \partial\{v > 0\}$ and that $\partial\{v > 0\}$ is differentiable at the origin. Then $\varepsilon$ in $a_{ij}$ needs to be equal to $0$ (and thus $a_{ij} = \delta_{ij}$). Proof. We first make a general calculation valid for all $p \in [0, 1)$. Let $v(x) = r^\beta g(\theta)$. We suppose (up to a rotation) that the arc $(0, \alpha)$ is a component of the positivity set of $g$. In this way, the equation reduces to an ODE for $g$. We let $x_0 := (1, 0)$. From Remark 3.2, we know that (3.1) is satisfied, and thus there exists $M > 0$ such that the growth bound holds at $x_0$.
Remark 5.2. From (5.7), one can also construct a homogeneous solution $v \ge 0$ of the obstacle problem $Lv = 1$ in $\{v > 0\}$, with $L$ as in (1.1) and $a_{ij}$ as in (1.4), whose free boundary is a cone; namely, in polar coordinates, one can take $v = v(r, \theta) = r^2g(\theta)$, with $g(\theta) = a_\varepsilon\left(1 - \cos(\omega_\varepsilon\theta)\right)$ if $\theta \in \left(0, \frac{2\pi}{\omega_\varepsilon}\right)$ and $g(\theta) = 0$ otherwise, where $a_\varepsilon := \frac{1}{2(2+\varepsilon)}$ and $\omega_\varepsilon := \sqrt{\frac{2(2+\varepsilon)}{1+\varepsilon}} < 2$ when $\varepsilon > 0$ (respectively, $\omega_\varepsilon > 2$ when $\varepsilon < 0$); see Figure 1. Notice in particular that the singular cone of the free boundary can be either obtuse or acute, according to the cases $\varepsilon > 0$ and $\varepsilon < 0$. Theorem 1.5 says that this example is somehow "typical": namely, if the free boundary of (1.1) meets the discontinuity points of the coefficients $a_{ij}$ in a non-degenerate way, then a singularity occurs. The proof of this fact is based on Theorem 5.1, and the details go as follows: Proof of Theorem 1.5. Assume by contradiction that $\partial\{v > 0\}$ can be written as a differentiable graph near the origin: say, up to a rotation, that $\{v > 0\}$ coincides with $\{x_2 < \varphi(x_1)\}$ near the origin, with $\varphi$ differentiable, $\varphi(0) = 0$ and $\varphi'(0) = 0$. We consider the blow-up sequence $v_{r_k}$ as in (4.1) (with $x_0 = 0$). From the discussion at the beginning of Section 4, we know that, for a suitable infinitesimal sequence $r_k$, the functions $v_{r_k}$ approach a global solution $v_0$. Near the origin, we have that $\{v_{r_k} > 0\}$ coincides with $\left\{x_2 < \frac{\varphi(r_kx_1)}{r_k}\right\}$. Using this and the fact that $\varphi(r_kx_1) = o(r_kx_1)$, we thus obtain that $\{v_0 > 0\}$ near the origin coincides with $\{x_2 < 0\}$. Also, from Theorem 1.4, we know that $v_0$ is homogeneous of degree 2. These considerations and Theorem 5.1 imply that $\varepsilon = 0$, against our assumptions.
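To make Remark 5.2 concrete, the following Python sketch evaluates the explicit homogeneous solution and reports the opening angle $2\pi/\omega_\varepsilon$ of the positivity cone for a few values of $\varepsilon$. Note one assumption: the extracted text garbles $\omega_\varepsilon$, and we use $\omega_\varepsilon = \sqrt{2(2+\varepsilon)/(1+\varepsilon)}$, the value for which $Lv = 1$ holds with $a_\varepsilon = 1/(2(2+\varepsilon))$ and which reduces to the classical $\omega_0 = 2$, $a_0 = 1/4$ when $\varepsilon = 0$.

```python
import numpy as np

def explicit_solution(r, theta, eps):
    """Homogeneous solution v(r, θ) = r² g(θ) of Remark 5.2. The square root
    in ω_ε is our reconstruction of the garbled formula (see lead-in)."""
    a = 1.0 / (2.0 * (2.0 + eps))
    omega = np.sqrt(2.0 * (2.0 + eps) / (1.0 + eps))
    g = np.where((theta > 0) & (theta < 2 * np.pi / omega),
                 a * (1.0 - np.cos(omega * theta)), 0.0)
    return r ** 2 * g

for eps in (-0.2, 0.0, 0.2):
    omega = np.sqrt(2.0 * (2.0 + eps) / (1.0 + eps))
    angle = 2 * np.pi / omega
    kind = "obtuse" if angle > np.pi else ("flat" if angle == np.pi else "acute")
    print(f"eps = {eps:+.1f}: opening angle of the positivity cone = "
          f"{angle:.3f} rad ({kind})")
```

Running this reproduces the dichotomy stated in the remark: for ε > 0 the free-boundary cone is obtuse, for ε < 0 it is acute, and only for ε = 0 is the free boundary flat (differentiable) at the origin.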
5,345.2
2017-01-11T00:00:00.000
[ "Mathematics" ]
A 26-amino acid insertion domain defines a functional transcription switch motif in Pit-1beta. Pit-1, a pituitary-specific POU homeodomain transcription factor, specifies three anterior pituitary lineages; governs growth hormone, prolactin, and thyrotropin gene expression; and mediates basal and Ras-stimulated prolactin promoter activity in GH4 pituitary cells. Alternate splicing of the Pit-1 message produces the Pit-1β isoform, which contains a 26-amino acid insertion, the β-domain, within the amino-terminal transactivation domain. The β-domain functions as a molecular switch, such that Pit-1β blocks both basal and Ras-stimulated prolactin promoter activity in GH4 pituitary cells yet preferentially enhances protein kinase A-stimulated prolactin promoter activity in a HeLa reconstitution system. To determine whether the amino acid sequence of the β-domain dictates function, we replaced it with five different 26-amino acid sequences. These mutants fail to block basal or Ras-stimulated rat prolactin promoter activity and fail to optimally enhance the protein kinase A response of the prolactin promoter. These data demonstrate that the amino acid sequence of the β-domain specifies its role as a molecular switch. Additionally, the presence of both Pit-1 and Pit-1β in pituitary cells allows diverse incoming signals to utilize structurally different forms of the same gene product, which can interact with distinct co-factors, integrating multiple signaling pathways at the level of the nucleus. Pit-1 is a pituitary-specific member of the POU homeodomain family of transcription factors, which includes the mammalian transcription factors Oct-1 and Oct-2, the Caenorhabditis elegans factor Unc-86, and at least 20 other transcription factors (1). Expression of Pit-1 is required for the normal growth and development of three anterior pituitary cell types, thyrotrophs, somatotrophs, and lactotrophs (2), as well as the proper expression of the anterior pituitary hormones prolactin (PRL), growth hormone (GH), and thyroid-stimulating hormone-β (3,4). The Pit-1 transcript contains six exons and five introns (5) and encodes a 33-kDa protein containing two regions important for transcriptional regulation of target promoters: an N-terminal transactivation domain (TAD) spanning amino acids 1-80 (6) and a C-terminal DNA binding and dimerization domain consisting of a POU-specific domain (amino acids 128-198) and a POU homeodomain (amino acids 214-273) (7-9) (Fig. 1). Both the POU-specific domain and the POU homeodomain are necessary for high-affinity DNA binding (10), while the transactivation domain is sufficient to activate transcription of a reporter gene when fused to the LexA or c-Jun DNA binding domain (6,10). Pit-1β, a splice variant of Pit-1, arises from the use of an alternate 3′ splice acceptor at the end of the first intron of the Pit-1 transcript (5,11,12) and contains a 26-amino acid (aa) insertion at position 48 in the transactivation domain (Fig. 1). This 26-aa insertion domain endows the Pit-1β isoform with a range of unique negative and positive transcriptional properties. Pit-1β acts as a dominant negative repressor of transcription from the rPRL promoter in pituitary cells, such as GH4 somatolactotrophs and α-TSH thyrotrophs (5, 11-13), and inhibits the Ras response of the rPRL promoter in GH4 cells (14,15).
Moreover, Pit-1β fails, in nonpituitary cells, to interact functionally with Ets-1, a widely expressed transcription factor required for full reconstitution of rPRL promoter activity.2 Yet Pit-1β is even more competent than Pit-1 to mediate signaling by PKA to the rPRL promoter in a HeLa reconstitution assay.3 Pit-1β demonstrates its repressive functions in pituitary cells but not in nonpituitary cells, implying that Pit-1β interacts with a cell type-specific factor to repress basal and Ras-activated rPRL expression. Because dimerization between splice variants had been identified as a mode of repression (18,19), it stood to reason that a pituitary-specific Pit-1/Pit-1β heterodimer might serve as such a repressor. The 26-aa β-domain has been conserved across the vertebrate lineage (5, 11, 12, 20-22) (Tables I and II). The N-terminal 12 amino acids of the β-domain have been especially well conserved among mammals, avians (92%), and teleost fish (67%), whereas the C-terminal 14 amino acids, also found in Pit-1T, a thyrotroph-specific splice variant of Pit-1 (23), have been poorly conserved. This conservation of structure raises the possibility of conservation of function, such that the amino acid sequence of the β-domain, and not the resultant altered spacing of the TAD generated by the inserted β-domain, confers upon the Pit-1β isoform its unique properties. Thus, the β-domain would not simply disrupt a pre-existing structure but rather would encode an intrinsic functional motif. Transfections-DNA was introduced into HeLa or GH4 cells by electroporation as follows. Approximately 2-3 × 10^6 enzymatically dispersed cells were mixed with plasmid DNA in a sterile gene-pulse chamber and exposed to a controlled electrical field of 500 microfarads at 220 V, as described previously (32). Cells from individual transfections were then maintained in Dulbecco's modified Eagle's medium, 10% fetal calf serum, and 50 μg/ml penicillin/streptomycin at 37 °C. The nonspecific effects of the RSV promoter upon transcription factor availability were controlled for by including amounts of pRSV β-globin plasmid DNA in all assays to render the total pRSV DNA concentration constant. Luciferase Assays-Transient transfections were performed in triplicate, in at least two separate experiments. After incubation for 24 h, cells were harvested with phosphate-buffered saline containing 3 mM EDTA, pelleted, and resuspended in 100 mM potassium phosphate buffer (pH 7.8), 1 mM dithiothreitol. Cells were lysed by three cycles of freeze-thawing, with 1 min of vortexing between thaws. Cell debris was pelleted by centrifugation for 10 min at 10,000 × g at 4 °C, and the supernatant was used for subsequent assays. Luciferase activity in the supernatant was assayed as described previously (25). Samples were measured in duplicate using a Monolight 2010 luminometer (Analytical Luminescence Laboratories, San Diego, CA). Total luciferase units were normalized to total protein present in extract supernatants. Protein assays were performed according to the method of Bradford (33) using commercially available reagents (Bio-Rad). Visualization of HA-tagged Pit-1 Proteins-Transient transfections were performed in duplicate.
After a 24-h incubation, HeLa cells transfected with plasmid DNAs were harvested with phosphate-buffered saline containing 3 mM EDTA, pelleted, and resuspended in triethanolamine-SDS solubilization buffer (55 mM triethanolamine, 111 mM NaCl, 2.2 mM EDTA, and 0.44% SDS) (34) with a mix of protease inhibitors (leupeptin, pepstatin A, chymostatin, aprotinin, antipain, and bestatin, each at 6 ng/ml) at 4 °C. Lysed extracts were passed through a 25-gauge needle seven times. The protein content of each extract was assayed according to the method of Lowry (35), using commercially available reagents (Bio-Rad). Equal amounts (100 μg) of protein from each extract were separated on 15% SDS-polyacrylamide gels and transferred to Immobilon-P (polyvinylidene difluoride) membrane (Millipore Corp., Bedford, MA). The HA-tagged Pit-1 proteins were visualized with a mouse monoclonal anti-HA primary antibody (BAbCO, Richmond, CA), secondary sheep anti-mouse HRP-conjugated antibodies (Amersham Life Sciences), and ECL media (Amersham Life Sciences). Dilutions of 1:1,000 of the primary anti-HA monoclonal antibody and of 1:10,000 of the secondary anti-mouse antibody preparation were used. RESULTS Mutagenesis of the Pit-1 β-Domain-In order to determine whether the wild-type amino acid sequence of the β-domain is required for its unique properties, we constructed five mutant Pit-1βs that contain different 26-aa substitutions for the β-domain at position 48 of the TAD. Thus, each mutant β-domain is of the same size and in the same position as the wild-type β-domain. Table III details the amino acid sequences of the mutant β-domains and the origins of the mutant sequences. Specifically, we chose residues derived from proteins with no known ability to modulate transcription and a short epitope tag as replacements for the wild-type β-domain. These mutant constructs, together with wild-type Pit-1 and the Pit-1β isoform, were each tagged with the HA epitope on the amino terminus, such that all expressed proteins would contain the same epitope in the same relative position, in order to allow for their detection regardless of alterations of protein structure by the β-domain substitutions. Expression of Pit-1 Proteins-It has been previously shown that wild-type pRSV Pit-1 and pRSV Pit-1β express protein to different levels in transient transfection experiments and that the transcription potency of these two isoforms must be normalized to their levels of expression (11). The mutant Pit-1 proteins presented here might be expressed to levels different from either wild-type Pit-1 or Pit-1β. In order to exclude the effect of differences in protein expression level on transcription potency, we carried out a series of transfection experiments to find levels of input DNA that would yield similar levels of protein expression from the wild-type and mutant Pit-1 vectors. In a preliminary experiment, 10 μg of each of the pRSV-HA Pit-1 constructs were introduced into HeLa nonpituitary cells by electroporation. Extracts from transfected cells were separated by SDS-PAGE, and Western blot analysis was used to determine the level of Pit-1 protein expression (data not shown). HA Pit-1β was expressed at lower levels than was HA Pit-1, HA Pit-1-FLAG was expressed at a level similar to HA Pit-1, and the other HA Pit-1 constructs were expressed at higher levels than was HA Pit-1.
In order to find DNA doses that roughly equalized the levels of Pit-1 expression for each of the constructs, varying amounts of each of the pRSV-HA Pit-1 constructs were introduced into HeLa nonpituitary cells by electroporation in a series of experiments. Having determined the optimal amounts of plasmid DNA for each construct, as described above, we show in Fig. 2 that similar levels of Pit-1 protein expression can be achieved with these plasmid DNA doses. The plasmid amounts transfected were as follows: 10 μg of HA Pit-1, 30 μg of HA Pit-1β, 5 μg of HA Pit-1-BPV, 2 μg of HA Pit-1-AU5, 10 μg of HA Pit-1-FLAG, 5 μg of HA Pit-1-INV, and 5 μg of HA Pit-1-MYC. Equal amounts (100 μg) of total protein from cell lysates of duplicate transfections were analyzed by SDS-polyacrylamide gel electrophoresis, except that lanes 17 and 18 were loaded with the same extract as in lane 16, but with 50 and 200 μg of total protein (Fig. 2). This was done in order to show that we can detect a 2-fold decrease or increase in Pit-1 protein expression relative to that in lane 16. In the vector-only lanes (lanes 1 and 2), the anti-HA antibody does not detect any protein migrating in the Pit-1 range of 30-33 kDa but does detect a nonspecific band of ~50 kDa whose intensity appears to correlate with the amount of total protein loaded. Examination of the relative amounts of HA Pit-1 versus the other Pit-1 constructs reveals that HA Pit-1 (lanes 3 and 4) and HA Pit-1β (lanes 5 and 6) were expressed at roughly equal levels and that the levels of all Pit-1 constructs were, with the exception of HA Pit-1-AU5 (lanes 9 and 10) and HA Pit-1-MYC (lanes 15 and 16), within 2-fold of the level of HA Pit-1. HA Pit-1-AU5 was expressed at barely detectable levels at this amount of input DNA; however, this mutant Pit-1 protein was detectable at higher levels of input DNA, indicating that the protein can be expressed (data not shown). In contrast, HA Pit-1-MYC was expressed at a level more than 2-fold greater than was HA Pit-1 in one of the duplicates. The relative DNA doses required to generate similar Pit-1 and Pit-1β protein levels are consistent with previous findings and again show that Pit-1β displays some level of intrinsic instability (11). Additionally, our data indicate that alteration of the β-domain reverses this instability, since less input DNA is required for the mutants, thus mapping the source of the Pit-1β isoform instability to the sequence of the β-domain. The DNA doses noted above were used for all further experiments. Experiments carried out in HeLa and GH4 cells in parallel showed that the relative pattern of expression of wild-type and mutant Pit-1 proteins was the same in both cell lines and that there were no cell-specific influences on Pit-1β protein production (data not shown). Differences in apparent mobility among the Pit-1 constructs were detected. Since sequencing had shown that all constructs contain the same number of nucleotides, and therefore encode the same number of amino acids, two explanations remained: post-translational modification of the substituted sequences or sequence-specific effects of the mutant β-domains on gel mobility. The latter effect has been observed in other systems (16). Mutant Pit-1βs Function as Transcription Factors-This substitution mutagenesis experiment could have induced alterations in the three-dimensional structure of each mutant Pit-1β such that it could no longer activate transcription under any circumstances.
Such a result would preclude the examination of the effects of changing the amino acid sequence of the β-domain on the specific aspects of transcriptional activation modulated by the β-domain. To address this problem, we utilized previous findings that in the nonpituitary HeLa, Ltk−, and Rat-6 cell lines, the transcription potency of the Pit-1β isoform, when normalized to its lower protein level, is similar to that of Pit-1 on the rPRL promoter (5,11). We used the HeLa nonpituitary cell line and our previously optimized Pit-1 protein-expression system to test the transcription potency of each mutant construct. The HA-tagged wild-type and mutant Pit-1s were introduced into HeLa nonpituitary cells with a rPRL promoter-driven luciferase reporter, and their ability to transactivate target promoter activity was measured. Fig. 3 depicts the results of a representative experiment. HA Pit-1β actually displayed a stronger effect on transcription of the target promoter compared with Pit-1 (29- versus 15-fold, respectively); this difference may be due to the slightly higher levels of Pit-1β expression with these amounts of input DNA (Fig. 2). All of the mutant Pit-1βs were able to transactivate the rPRL promoter in the 15-30-fold range, except for HA Pit-1-AU5. Again, the transcription effect generally correlated with the level of Pit-1β protein expressed, as shown in Fig. 2. As noted above, Pit-1β acts as a dominant negative repressor of rPRL promoter activity in pituitary cells (5, 11-13). The precise mechanism for this cell type-specific inhibitory effect of Pit-1β remains unclear. The only structural difference between these two isoforms is the β-domain; we therefore tested whether the amino acid sequence of the β-domain imparts the Pit-1β-mediated repression of rPRL promoter activity in pituitary cells. The Pit-1β mutant constructs were introduced into GH4 pituitary cells by electroporation in the presence of a rPRL-driven luciferase reporter (Fig. 4). As shown previously, HA Pit-1 has little effect on rPRL promoter activity in this system (13,15), whereas HA Pit-1β acted as a dominant negative repressor of rPRL expression, decreasing reporter expression 3-fold from basal levels. Additionally, each of the five β-domain mutants lost the dominant negative effect attributed to the β-isoform splice variant. These results demonstrate that the wild-type amino acid sequence of the β-domain is necessary for interference with basal rPRL expression in pituitary cells and that the altered spacing of the TAD generated by the β-domain is not sufficient to cause transcriptional repression. Another attribute of the Pit-1β isoform is its ability to repress oncogenic V-12 Ras signaling to the rPRL promoter, which normally requires a functional interaction between Pit-1 and Ets-1 (15).2 Fig. 2 (legend excerpt): After 24 h, cells were harvested and analyzed by SDS-polyacrylamide gel electrophoresis. The blot was probed with a mouse monoclonal anti-HA epitope primary antibody (BAbCO). The numbers at the left mark the position of protein molecular weight markers (Life Technologies, Inc.). All lanes were loaded with extract containing 100 μg of total protein, except for lanes 17 and 18, which were loaded with the same extract as lane 16, but with 50 and 200 μg of total protein (i.e., with 0.5 times and 2 times as much total protein loaded). Pit-1β repression of the Ras response appears to occur by Pit-1β forming a nonproductive complex with Ets-1.2 Again, the β-domain could act either by disrupting the structure of the Pit-1 TAD or through properties inherent in its own sequence.
In order to test whether repression of Ras signal transduction requires a wild-type β-domain, the mutant and wild-type Pit-1 constructs were introduced into GH4 pituitary cells by electroporation in the presence of the rPRL-driven luciferase reporter and pSV Ras (Fig. 5). As documented previously, co-transfection of a Pit-1 construct enhances the Ras response from 11-fold in its absence to 32-fold in its presence, and co-transfection of the Pit-1β isoform not only failed to enhance the Ras response but actually reduced it to one-third the level achieved by Ras alone (Fig. 5). In contrast, each substitution mutation of the β-domain resulted in a switch of the Pit-1β phenotype, such that each no longer repressed the Ras response, but instead enhanced the Ras response of the rPRL promoter as effectively as did Pit-1, from 11-fold up to 24-42-fold (Fig. 5). These data demonstrate that the wild-type sequence of the β-domain is required for repression of the Ras response of the rPRL promoter and, as before, that the altered spacing of the TAD generated by the β-domain is insufficient to cause transcriptional repression. β-Domain-specific Sequences Interfere with the Functional Interaction between Pit-1β and Ets-1 in Nonpituitary Cells-In addition to the role of Ets-1 in mediating the Ras response of the rPRL promoter, we have recently found that Ets-1 plays a critical role in determining basal rPRL promoter activity as well, and that it does so by functionally and physically interacting with Pit-1.2 A HeLa nonpituitary cell reconstitution system was used to demonstrate that Pit-1 synergizes with Ets-1 to optimally reconstitute rPRL promoter activity, whereas the Pit-1β isoform synergizes poorly, if at all, with Ets-1. To investigate whether β-domain-specific sequences interfere with the ability of Pit-1β to synergize with Ets-1, we assessed the ability of the Pit-1 mutants to interact functionally with Ets-1 in the HeLa reconstitution system. The Pit-1 constructs were introduced into HeLa nonpituitary cells by electroporation in the presence of the rPRL promoter-driven luciferase reporter and Ets-1 (Fig. 6). As documented previously, co-transfection of a Pit-1 construct enhances the Ets-1 effect from 14-fold in its absence to 1,261-fold in its presence. Alternatively, co-transfection of the Pit-1β isoform only enhanced the Ets-1 response from 14- to 103-fold (Fig. 6). Despite the fact that each substitution mutation of the β-domain contained a 26-aa insert, each mutant resulted in a striking switch in response, such that each β-mutant was now able to functionally interact with Ets-1 in a manner indistinguishable from Pit-1, which is devoid of any insert (Fig. 6). These data once again demonstrate the importance of the wild-type β-domain sequence for the β-specific effect. β-Domain-specific Sequences Confer upon Pit-1β an Enhanced Ability to Respond to PKA in Nonpituitary Cells-We have previously shown that Pit-1 serves to significantly enhance the PKA response in a HeLa nonpituitary cell gene transfer reconstitution assay (29). In this HeLa nonpituitary cell reconstitution assay, we have also demonstrated that the Pit-1β isoform is able to enhance the PKA response more effectively than does Pit-1,3 showing that the unique properties of Pit-1β are not all negative. This enhancement of Pit-1 function by the β-domain demonstrates that this motif is not simply an inhibitory motif but can function to enhance the transcription activation induced by select signaling pathways.
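Since every assay above reports reporter activity as fold activation after normalizing luciferase units to total protein (see "Luciferase Assays" under Methods), the underlying arithmetic is simple enough to sketch. The following Python snippet uses hypothetical numbers, not values from the paper:

```python
def fold_activation(luc_condition, protein_condition, luc_control, protein_control):
    """Fold activation of the rPRL-luciferase reporter: luciferase units are
    first normalized to total protein (as in the paper's Methods), then
    expressed relative to the control (e.g., reporter-only) transfection."""
    return (luc_condition / protein_condition) / (luc_control / protein_control)

# Hypothetical raw readings: (light units, µg total protein).
control = (1.2e4, 80.0)       # reporter alone
plus_pit1 = (1.7e5, 75.0)     # reporter plus a Pit-1 construct, say

print(f"fold activation: {fold_activation(*plus_pit1, *control):.1f}")  # ~15.1
```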
To assess the ability of the substituted β-domains to enhance the PKA effect, the Pit-1 constructs were introduced into HeLa nonpituitary cells by electroporation in the presence of the rPRL promoter-driven luciferase reporter and the β-catalytic isoform of PKA (Fig. 7). PKA alone enhanced rPRL promoter activity 3-fold, while co-transfection with HA Pit-1 increased the PKA response to 96-fold (Fig. 7). Co-transfection with Pit-1β, however, increased the PKA response to 263-fold, in agreement with previous results.3 The mutant Pit-1βs synergized with PKA less well, increasing the PKA effect from 3-fold, in the absence of any transfected Pit-1, to 14-111-fold, in the presence of the various Pit-1β mutants. Of note, the BPV and FLAG β-domain mutants functioned similarly to HA Pit-1 in this assay, whereas the AU5, INV, and particularly the MYC β-domain mutants functioned very poorly with respect to PKA enhancement. These results cannot be explained by differences in protein expression alone (Fig. 2), since β-domain mutants expressed at high levels (INV and MYC) have minimal PKA response, and the mutant expressed poorly (AU5) displays a detectable PKA response (Fig. 7). The observation that both Pit-1 and Pit-1β enhance the PKA response indicates that sequences common to both can function to mediate the PKA effect. Additionally, this is the only assay in which the β-domain mutations did not uniformly convert the response to that of the Pit-1 isoform but instead often disrupted the response, suggesting that these common sequences may lie at or near aa 48, the site of β-domain insertion. Finally, since Pit-1β was unique in its ability to enhance the PKA effect above the level seen with Pit-1, sequences specific to the β-domain clearly further modulate the PKA effect. The Cell Type-specific Inhibitory Effect of Pit-1β Is Not Mediated by Pit-1-The observation, both here and previously, that Pit-1β acts in pituitary cells as a dominant negative effector (Fig. 4) (5, 11-13), yet acts in nonpituitary cells as a positive effector (Figs. 3 and 7) (5, 11),3 argues that a cell type-specific factor is required for the cell-specific, dominant negative effects of the Pit-1β isoform. To confirm that the differential effects of the two Pit-1 isoforms in GH4 pituitary cells occur at various DNA doses with respect to the rPRL promoter, increasing amounts of each Pit-1 expression vector were transfected separately into GH4 cells, and PRL-luciferase reporter activity was measured. Fig. 8A shows that, in agreement with the previous single-dose results (Fig. 4), increasing doses of HA Pit-1 had little effect on rPRL promoter activity, while increasing doses of HA Pit-1β repressed basal rPRL activity in GH4 pituitary cells to levels one-third of those in the absence of Pit-1β. Since Pit-1 is a pituitary-specific transcription factor that is required for PRL gene expression, it seemed a possible candidate target for Pit-1β, via the formation of a nonproductive Pit-1·Pit-1β heterodimer. To directly test this hypothesis, we examined the effect of forming Pit-1·Pit-1β heterodimers in HeLa nonpituitary cells by introducing increasing DNA doses of Pit-1β together with a constant amount (10 μg) of Pit-1 expression vector. Fig. 8B shows that co-transfecting increasing DNA doses of Pit-1β resulted in a significant, Pit-1β dose-dependent enhancement of the Pit-1 effect, increasing the effect from 7-fold, in the absence of Pit-1β, to 1,337-fold, in the presence of the highest dose of Pit-1β (30 μg).
Indeed, the effects of these two Pit-1 isoforms were more than additive (i.e. 10 g of Pit-1 alone was 7-fold, and 30 g of Pit-1␤ alone was 28-fold, but together the effect was 1,337-fold) and thus synergistic. Clearly, Pit-1 is not the cell type-specific target mediating the inhibitory effect of Pit-1␤. Together, our data suggest that another cell type-specific factor mediates the repressor function of Pit-1␤ in GH 4 cells and that it is the presence and precise amino acid sequence of the ␤-domain that imparts these selective functions to the Pit-1␤ isoform. DISCUSSION Because Pit-1 is expressed at levels that are 7-8-fold greater than Pit-1␤, it has been assumed that Pit-1 is the dominant functional isoform with respect to pituitary-specific gene expression and cell function. However, the level of Pit-1 expression is very high (0.5% of total protein) (8,17), and thus the actual amount of Pit-1␤ expression is itself quite high for a transcription factor (11). Moreover, Pit-1␤ is the only isoform found in salmon and turkey, and the amino acid sequence of the ␤-domain, particularly the first 12 amino acids, is highly conserved, from teleosts to primates (Tables I and II), suggesting that the Pit-1␤ isoform may preserve the ancestral gene structure. In this paper, we have specifically addressed the potential functions of the Pit-1␤ isoform compared with Pit-1 and ad-dressed the contributions of the ␤-domain structure to its various functions. Here we show, controlling for equivalent levels of protein expression, that the Pit-1␤ domain functions as a sequence-specific molecular switch and that the amino acid sequence of the ␤-domain confers upon the Pit-1 ␤ isoform unique transcriptional properties. The specific amino acid sequence of the ␤-domain is required to enhance the responsiveness of the Pit-1␤ isoform to PKA-mediated signaling and to block its ability to mediate basal and Ras-activated rPRL gene transcription. The difference in Pit-1␤ protein expression relative to Pit-1 appears to be intrinsic to the Pit-1␤ coding sequences rather than a pituitary-specific regulation of splicing (5,11,12). Additionally, both the Pit-1␤ and Pit-1 expression constructs utilized here are driven from identical Rous sarcoma viral promoters yet exhibit the relative overexpression of Pit-1 versus Pit-1␤ in HeLa nonpituitary cells (Fig. 2). These expression levels must be due to either differences in translational efficiency or in mRNA and/or protein stability that are independent of the pituitary cell type. The latter seems most likely, since these constructs lack any differences in 5Ј-or 3Ј-untranslated sequences that might regulate translational efficiency. Indeed, from these studies, it is clear that the ␤-domain is the intrinsic structure that governs Pit-1␤ protein expression level, since substitution mutagenesis of the ␤-domain usually increases the relative expression level of the Pit-1␤ isoform (Fig. 2) and because Pit-1 and Pit-1␤ only differ by the ␤-domain. To circumvent problems in interpretation of the data that might be due to differential protein expression levels, we carefully adjusted the plasmid DNA concentrations of the various ␤-domain mutants that were transfected to achieve equivalent levels of Pit-1 and Pit-1␤ protein production (Fig. 2). In so doing, we have directly demonstrated that Pit-1␤ is as efficient a transactivator of rPRL promoter activity as is Pit-1 (Fig. 
3), a result previously suggested by mathematical normalization (11), and that the β-domain sequence can be altered without significantly diminishing its transcriptional potency for the rPRL promoter. The development of any mechanistic model that explains the differential effects of Pit-1 and Pit-1β must take into account (i) the cell-specific behavior of the β-domain (Figs. 3-5); (ii) the optimal enhancement of the PKA response by Pit-1β (Fig. 7); and (iii) the marked transcriptional synergy of Pit-1 and Pit-1β (Fig. 8B). The model that we propose is that combinatorial interactions of Pit-1 isoforms with other transcription factors control rPRL promoter activity and that the precise combination of factors dictates the ultimate effects. For example, we have previously shown that Pit-1 and Ets-1 interact functionally and physically to allow both basal and Ras-activated transcription from the rPRL promoter and that Pit-1β fails to interact functionally with Ets-1 in a reconstitution of basal rPRL promoter activity, although it retains its ability to interact physically with Ets-1 (15).2 Since Ets-1 is expressed in GH4 pituitary cells but not in HeLa cells,2 we utilized the HeLa cell system to show that Pit-1β interferes with the transcriptional potency of Ets-1 in a β-domain sequence-specific manner (Fig. 6). Moreover, mutation of the β-domain reverses the inhibitory effect of Pit-1β in GH4 pituitary cells (Figs. 4 and 5), which contain Ets-1, whereas the β-domain, or mutations thereof, does not decrease the transcriptional potency of Pit-1β in HeLa nonpituitary cells (Fig. 3), which lack Ets-1. These data indicate that the Pit-1/Ets-1 functional interaction is productive, while the Pit-1β/Ets-1 interaction is actually inhibitory. In keeping with our hypothesis, we propose that the differential effects of the β-domain on the PKA response are due to a functional interaction of Pit-1β with an as yet unidentified transcription factor (Factor X) that is a target of the PKA signaling pathway, and that the Pit-1β·Factor X functional interaction is more productive than the Pit-1·Factor X interaction. Moreover, the Pit-1·Pit-1β combination appears to be more potent than either the Pit-1·Pit-1 or the Pit-1β·Pit-1β combination (Fig. 8B), further corroborating our hypothesis. In summary, these data indicate that the β-domain encodes a transcription switch motif that selectively impairs the functional interaction with Ets-1 yet enhances the functional interactions with Pit-1 and with a component of the PKA pathway. The ability to generate an array of Pit-1 isoforms with altered transcriptional properties in a pituitary cell nucleus might allow different signaling intermediates to "select" specific Pit-1 isoforms with which to interact and thus to regulate pituitary hormone production. Such differential interactions might allow for an enhanced repertoire of signal integration for the highly regulated pituitary hormones.
6,542.2
1996-11-15T00:00:00.000
[ "Biology" ]
Report on COVID-19 Verification Case Study in Nine Countries Using the SIQR Model

Introduction

Using the SIQR model proposed by Takashi Odagaki 1), this report examines the epidemic trend of COVID-19 from February to May 2020 in nine major countries and identifies specific trends in the actual state of infection in Japan. When the Diamond Princess returned to the port of Yokohama on February 3, the total number of infected people on board was 712. While I was keeping a close eye on the situation after disembarkation began on February 19, I noticed that the trend in the number of infected people in Japan during March and April differed from the rest of the world. Figure 1 compares the cumulative number of infected people by country after each country reached 100 cases, and it can be seen that Japan's trend is quite peculiar. Why do Japan's trends up to the date of the decision to postpone the Olympics (3/24) differ from those of Germany and France? Why do they differ from Sweden, which adopted a herd immunity policy? And why did the number of infections increase rapidly after the date of the postponement decision? Through theoretical analysis of these questions using the SIQR model, we summarize below how the situation progressed.

Methods of Analysis

First, we summarize the method of the SIQR model based on the paper by Takashi Odagaki. The SIQR model, a modification of the conventional SIR model, is unique in that it provides a theoretical understanding of epidemic phenomena by separating quarantined infected people identified by testing from community-acquired infected people who remain untested, and it also provides a theoretical consideration of measures to control the epidemic.

Setting the Basic Formula

The statistical data reported daily are the "number of new infections", which can be treated as the "number of positive cases per day" and expressed as ΔQ(t) = qI(t) in the SIQR model. That is, I(t) = ΔQ(t)/q = ΔQ(t)/(βN − γ − λ), where I(t) is the number of infecteds at large, q is the quarantine rate, βN is the transmission coefficient, γ is the removal rate, and λ is the decay (or growth) rate of infecteds at large, which can be expressed as λ = βN − q − γ. The next step is to determine the effectiveness of social distancing (a) and the enhancement factor of the quarantine rate (b), the factors describing human-induced infection control measures, along with the rate of increase or decrease in the number of new positives. Taking into account the rate q′ at which new patients are quarantined immediately after infection, we define λ = (1 − q′ − a)βN − qb − γ. Since no one is isolated immediately after infection owing to the long incubation period, q′ is in principle taken to be zero.
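The bookkeeping above is simple enough to check numerically. The following C sketch is our addition: it evaluates I(t) = ΔQ(t)/q and the measure-adjusted rate λ = (1 − q′ − a)βN − qb − γ with illustrative parameter values, not the paper's estimates.

#include <stdio.h>

/* Infecteds at large from the daily positives: I(t) = dQ(t)/q. */
double infecteds_at_large(double dQ, double q) {
    return dQ / q;
}

/* Decay (or growth) rate with control measures:
 * lambda = (1 - q' - a) * betaN - q*b - gamma. */
double lambda_with_measures(double betaN, double q, double gamma,
                            double qprime, double a, double b) {
    return (1.0 - qprime - a) * betaN - q * b - gamma;
}

int main(void) {
    double betaN = 0.20, gamma = 0.03, q = 0.13;  /* illustrative values  */
    double lambda = betaN - q - gamma;            /* rate with no measures */
    printf("lambda (no measures) = %.3f\n", lambda);
    printf("I(t) for dQ = 500    = %.0f\n", infecteds_at_large(500.0, q));
    printf("lambda (a=0.5, b=1)  = %.3f\n",
           lambda_with_measures(betaN, q, gamma, 0.0, 0.5, 1.0));
    return 0;
}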
Verification Procedure

The "number of positive cases per day" ΔQ(t) is approximated by an exponential function ΔQ(t0)·e^(λ(t−t0)) with coefficient λ, as for the number of infecteds at large I(t). Based on the distribution of the number of positive cases per day, we classify the epidemic into an initial phase, an expansion phase, a transition phase, and a decay phase, and establish a continuous period classification from the first to the fourth period. The coefficient λ and the initial value ΔQ(t0) are determined by fitting an exponential approximation curve to the actual values of each period. In the case of China and Korea, the spread of infection precedes the first period and is considered to start in the expansion phase. Next, γ, βN, q, a, and b are set as follows. 1) Since the removal rate γ of community-infected persons is considered to be almost constant over the entire period, the number of days to recovery is estimated at 33 days, and γ is set to its inverse, 0.03. 2) The quarantine rate q in the 2nd period (expansion phase) is considered to be very low, and so q = 0 is set. 3) The transmission coefficient βN in the 2nd period (expansion phase) is set from the coefficient λ determined by an exponential approximation curve using the 2nd-period data (βN = λ + q + γ). This βN is also used in the 1st, 3rd and 4th periods. 4) The quarantine rate q in the 3rd period (transition phase) is determined by an exponential approximation curve based on the 3rd-period data; it is set from the coefficient λ (as close to 0 as possible), βN and γ (q = βN − γ − λ). The same q is used in the 1st and 4th periods. 5) The effectiveness of social distancing (a) and the enhancement factor of the quarantine rate (b) in the 4th (decay) period are set so that λ = (1 − q′ − a)βN − qb − γ, based on the coefficient λ determined by the exponential approximation curve from the 4th-period data and the previously set βN, q, and γ. The λ re-computed after setting (a) and (b) is taken as the quantity determining the decay (or growth) rate of infecteds at large. The default value of the effectiveness of social distancing (a) is 0, and the default value of the enhancement factor of the quarantine rate (b) is 1.0. In the 1st (initial) period, (b) is treated as variable, and in the 4th (decay) period, (a) and (b) are treated as variable.
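Steps 3)-5) of this recipe reduce to fitting λ as the slope of ln ΔQ(t) over each period and then solving for βN and q. The following C sketch is our addition and illustrates only that fitting step; the daily counts are hypothetical, not data from the paper (compile with -lm).

#include <math.h>
#include <stdio.h>

/* Fit dQ(t) = dQ(t0) * exp(lambda*(t - t0)) over one period by
 * ordinary least squares on ln(dQ); the slope is lambda. */
double fit_lambda(const double *dQ, int n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int t = 0; t < n; t++) {
        double y = log(dQ[t]);
        sx += t; sy += y; sxx += (double)t * t; sxy += t * y;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main(void) {
    double expansion[] = {100, 128, 166, 214, 275, 355};  /* hypothetical */
    double gamma = 0.03;                                  /* 1/33 days    */
    double lam2  = fit_lambda(expansion, 6);
    double betaN = lam2 + 0.0 + gamma;   /* step 3): q = 0 in expansion   */
    printf("lambda = %.3f, betaN = %.3f\n", lam2, betaN);
    /* step 4): with lambda ~ 0 in the transition phase, q = betaN - gamma */
    printf("q (transition) = %.3f\n", betaN - gamma);
    return 0;
}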
Analysis of Country Data

Based on the above procedure, we analyze the actual numbers of infected people in Japan and Tokyo, as well as in the other countries, to clarify the situation in each country and the parameters of the SIQR model. Data on population and number of tests conducted are from Worldometers' COVID-19 statistics 5).

Verification using Japan and Tokyo data

Owing to the peculiarity of Japan's trend up to the date of the decision to postpone the Olympics (3/24), the 1st period is set for 2/18 to 3/23, the 2nd expansion period for 3/24 to 4/10, the 3rd transition period for 4/11 to 4/18, and the 4th decay period for 4/19 to 5/19. The theoretical analysis and parameter verification values based on actual daily positive case data are summarized in Table 1, together with those of the foreign countries. (1) The enhancement factor of the quarantine rate (b): In the 1st period in Japan (the test suppression phase), the enhancement factor of the quarantine rate (b) is calculated to be 0.57 from the λ obtained by the exponential approximation curve and the βN, q and γ determined from the 2nd through 4th periods, since the effectiveness of social distancing (a) can be regarded as zero. In other words, until the date of the decision to postpone the Olympic Games (3/24), Japan suppressed testing and quarantined people at only about 60% of the rate achieved in the second and subsequent periods. (2) The effectiveness of social distancing (a): In the 4th decay period in Japan, the effectiveness of social distancing (a) is calculated to be 0.53 from the λ obtained by the exponential approximation curve and the βN, q, and γ determined in the 2nd and 3rd periods, since the enhancement factor of the quarantine rate (b) can be regarded as 1.0. This means that the behavioral restraint rate during the decay phase in Japan is estimated to have been about 50%. Using the same method, the effectiveness of social distancing (a) during the decay phase based on Tokyo data is 0.70; it is estimated that contact between people in Tokyo was reduced by about 70%. According to the verification of Tokyo data using the same method, the ratio of infecteds at large to positive cases (termed the "effective decay (or growth) rate of infecteds at large" in the model) was 9.86: approximately 50,000 infecteds at large compared with 5,000 positive cases, with the fraction of infecteds at large in the population at 0.365%.

Verification using foreign data

(1) Verification using China data: The period classification in China is set as follows: the first expansion period is set for 1/23-2/2, the second transition period for 2/2-2/4, and the third decay period for 2/5-3/6. The transmission coefficient was 0.332 and the quarantine rate of community-infected persons was 0.345, both the highest among the nine countries. The ratio of infecteds at large to positive cases was 2.19, with approximately 180,000 infecteds at large compared with 83,000 positive cases. The fraction of infecteds at large in the population is estimated to be 0.013% (see Figures 6 and 7 and Table 1). (2) Verification using South Korea data: In Korea, the 1st expansion period is set for 2/20-2/29, the 2nd transition period for 3/1-3/4, the 3rd decay period (phase 1) for 3/5-3/18, and the 4th decay period (phase 2) for 3/19-5/19. The transmission coefficient was 0.266 and the quarantine rate of community-infected persons was 0.22, both nearly twice as high as those of Japan. In Korea, the spread of infection was rapid and the rate of decay was fast.
The enhancement factor of the quarantine rate (b) in the 3rd period was 1.78, contributing to the decrease in the post-measure transmission coefficient βN. It can be said that the policy of early detection and isolation through thorough testing was effective. The ratio of infecteds at large to positive cases was 3.32: for the 11,000 positive cases, there were approximately 37,000 infecteds at large. The fraction of infecteds at large in the population is estimated to be 0.072%. (3) Verification using Italy data: In Italy, the 2nd expansion period is set for 3/1-3/27, the 3rd transition period for 3/28-4/2, and the 4th decay period for 4/3-5/19. The transmission coefficient was 0.195, and the quarantine rate of community-infected persons was 0.165, about 30-40% higher than that of Japan. On the other hand, the effectiveness of social distancing in the 4th period was 0.21, smaller than in Japan; behavioral restraint appears not to have been as thorough as in Japan. The ratio of infecteds at large to positive cases was 6.03, with approximately 1.37 million infecteds at large compared with 230,000 positive cases. The fraction of infecteds at large in the population is estimated to be 2.26% (see Figures 10 and 11 and Table 1). (4) Verification using France data: In France, the 2nd expansion period is set for 3/1-3/31, the 3rd transition period for 3/31-4/11, and the 4th decay period for 4/12-5/19. The transmission coefficient was 0.209, and the quarantine rate of community-infected persons was 0.204, about 50-60% higher than that of Japan. On the other hand, the effectiveness of social distancing in the 4th period was 0.28, smaller than in Japan; behavioral restraint appears not to have been as thorough as in Japan. The ratio of infecteds at large to positive cases was 3.08: for the 180,000 positive cases, there were approximately 560,000 infecteds at large. The fraction of infecteds at large in the population is estimated to be 0.85% (see Figures 12 and 13 and Table 1). (5) Verification using Germany data: In Germany, the 2nd expansion period is set for 2/23-3/20, the 3rd transition period for 3/21-3/28, and the 4th decay period for 3/29-5/19. The transmission coefficient was 0.238, and the quarantine rate of community-infected persons was 0.21, about 60-70% higher than that of Japan. On the other hand, the effectiveness of social distancing in the 4th period was 0.21, smaller than in Japan; behavioral restraint appears not to have been as thorough as in Japan. The ratio of infecteds at large to positive cases was 4.58, with approximately 810,000 infecteds at large compared with 180,000 positive cases. The fraction of infecteds at large in the population is estimated to be 0.97% (see Figures 14 and 15 and Table 1).
(6) Verification using Spain data: In Spain, the 2nd expansion period is set for 3/2-3/25, the 3rd transition period for 3/26-4/1, and the 4th decay period for 4/2-5/19. The transmission coefficient was 0.279, and the quarantine rate of community-infected persons was 0.249, about 80-100% higher than that of Japan. On the other hand, the effectiveness of social distancing in the 4th period was 0.25, smaller than in Japan, indicating that behavioral restraint was not as thorough as in Japan. The ratio of infecteds at large to positive cases was 3.66, with approximately 850,000 infecteds at large compared with 230,000 positive cases. The fraction of infecteds at large in the population is estimated to be 1.82% (see Figures 16 and 17 and Table 1). (7) Verification using USA data: In the U.S., the 1st initial period is set for 3/3-3/18, the 2nd expansion period for 3/19-4/4, the 3rd transition period for 4/4-4/23, and the 4th decay period for 4/24-5/19. After the 2nd period, the transmission coefficient was 0.148 and the quarantine rate of community-infected persons was 0.100, the same level as in Japan. The ratio of infecteds at large to positive cases was 9.59, with approximately 14.65 million infecteds at large compared with 1.53 million positive cases. The fraction of infecteds at large in the population is estimated to be 4.43% (see Figures 18 and 19 and Table 1). (8) Verification using Sweden data: In Sweden, the 2nd expansion period is set for 3/6-4/10, the 3rd transition period for 4/11-4/23, and the 4th decay period for 4/24-5/19. The transmission coefficient was 0.113, and the quarantine rate of community-infected persons was 0.0673, roughly half that of Japan. The behavioral restraint rate in the 4th decay period was 0.24, smaller than in Japan, indicating that behavioral restraint was not thorough. The ratio of infecteds at large to positive cases was 14.1, with approximately 430,000 infecteds at large compared with 31,000 positive cases. The fraction of infecteds at large in the population is estimated to be 4.32% (see Table 1). Sweden tends to have a very low quarantine rate of infecteds at large and a very high fraction of infecteds at large. This may reflect its herd immunity policy.
4. Comparison of measures in each country

We divide infection control measures into an active group (λ ≤ −0.1), an average group (−0.1 < λ < −0.02), and a passive group (−0.02 ≤ λ < 0) according to the magnitude of the decay (or growth) rate of infecteds at large (λ) in each country, and compare them. The rightward direction of Figure 22 represents the level of the testing/quarantine system, and the downward direction represents the magnitude of the effectiveness of social distancing (plus the early quarantine effect). 3) The United States and Sweden are the only countries with both a poor testing/quarantine system and low effectiveness of social distancing. Sweden, which has a herd immunity policy, has the lowest rates for both parameters. From the above, it can be said that the measures taken by South Korea, which did not have a lockdown system and adopted a policy of early detection and quarantine of positive cases through PCR testing, were effective.

5. Discussion

The following is a summary of the characteristics of the novel coronavirus infection trend in Japan. Figure 24 shows the trends in the quarantine rate of infecteds at large and the transmission coefficient for each country; there is a very high correlation between the two factors. (3) Quarantine rate and the ratio of infecteds at large to positive cases: Figure 25 shows the relationship between the quarantine rate and the ratio of infecteds at large to positive cases; the smaller the quarantine rate, the larger this ratio tends to be. In the case of Japan, the ratio was 8.52, following Sweden's 14.1 and the U.S.'s 9.59. (4) Number of positive cases and fraction of infecteds at large in the population: Figure 26 shows the relationship between the number of positive cases per 100,000 people and the fraction of infecteds at large in the population; when the number of positive cases is low, the fraction of infecteds at large tends to be small. In Japan, the fraction of infecteds at large is 0.109%, which ranks third after China and South Korea. On the other hand, in Sweden and the U.S., where the number of positive cases per 100,000 people is high, the fraction of infecteds at large is 4.3% to 4.4%, and in Spain and Italy the rate is 1.8% to 2.3%. Even for countries with widespread infection such as the United States, Sweden, and others, the SIQR model has the potential to reproduce the current situation (see Table 2). In particular, the estimated fraction of infecteds at large in the population of 0.365% in Tokyo is close to the SoftBank Group's antibody positivity rate of 0.43%.
(6) Effectiveness of social distancing and movement reduction rate: The effectiveness of social distancing during the 4th period was estimated to be 53% for Japan as a whole and 70% for Tokyo. This is higher than in the Google mobile location survey 6), which uses mobile device location data: according to that report, during the 4th period (4/19-5/19) the average decline in "retail and entertainment", "transit stations" and "workplaces" was 39% for Japan as a whole and 54% for Tokyo. In the case of Sweden, the average movement reduction rate in the same report happened to match the 24% effectiveness of social distancing in the 4th decay period, but in all other countries the effectiveness of social distancing tended to be smaller than the report's movement reduction rate (see Figure 27). In the case of Japan, the level of self-restraint among citizens is considerably higher than in the United States and Europe, but the movement reduction rate does not necessarily reflect the effectiveness of social distancing. In locked-down countries such as Italy, France, and Spain, the movement reduction rate is as high as 60-70%, while the effectiveness of social distancing is as low as 20-30%. One possible reason for this is the high fatality rate in these countries: the relationship between the fatality rate and the difference between the movement reduction rate and the effectiveness of social distancing shows a fairly high correlation (Figure 28). In other words, countries with higher fatality rates tend to have less of a lockdown effect and lower effectiveness of social distancing. Possible reasons why the effectiveness of social distancing is low in locked-down countries are that (1) highly toxic and highly infectious variants of the new coronavirus raise the fatality rate, so that reduced movement alone may not reduce contact infection, and (2) differences in lifestyle habits, such as resistance to wearing masks, hugging, handshaking, and loud conversation, mean that reduced movement alone may not reduce contact infection. (7) The peculiarity of Japan: The number of tests conducted in Japan was extremely low, but this alone cannot explain the peculiar curve in the number of cases shown in Figure 1. Other possible reasons for Japan's peculiarity are as follows. 1) The Ministry of Health, Labour and Welfare's "How to Prevent a New Coronavirus" (Feb. 17) established a "four-day fever rule" under which patients had to have a fever of 37.5 °C or higher for four days or more before being seen. As a result of the strict implementation of this rule, the percentage of PCR tests performed nationwide relative to the number of consultations with the Returnee and Contact Center was 2.9% by 11 March and 4.0% by 31 March.
2) Until the date of the decision to postpone the Olympics (March 24), there was an incentive to keep information on the number of infected people, unfavorable to the hosting of the Olympic Games, as low as possible. Verification with the SIQR model showed that the enhancement factor of the quarantine rate was 0.57 during the first period of test suppression, up to the date of the postponement decision; that is, testing was suppressed by about 40% compared with the second and subsequent periods. This is why the Japanese curve in Figure 1 is peculiar for the period from February 17 to March 24. 3) After the postponement decision there was no longer a brake on the release of information on the number of infected people, but the Ministry of Health, Labour and Welfare, the National Institute of Infectious Diseases, and other governmental organizations monopolized the data for unified analysis of infection data. As a result, no measures were taken to expand PCR testing by the private sector, and the "four-day fever rule" continued. 4) Verification with the SIQR model showed that, as a result of test suppression, the quarantine rate of infecteds at large in Japan was 0.13, which was 20% to 60% smaller than in the other countries (0.165 to 0.345), except for the United States (0.100) and Sweden (0.0673).
5,913.6
2020-10-09T00:00:00.000
[ "Economics" ]
Time-Varying Pseudorandom Disturbed Pattern Generation Algorithm for Track Circuit Equipment Testing

To improve the test accuracy and fault coverage of high-speed railway equipment boards, a time-varying pseudorandom disturbance algorithm based on the automatic test pattern generation (ATPG) technology used in chip testing is proposed. The algorithm combines a pseudorandom pattern generation algorithm with the deterministic D pattern generation algorithm. Existing pseudorandom number generation methods usually require a random seed to generate a series of pseudorandom numbers; in this algorithm, the system timer is used as the random seed, giving a time-varying-seed pseudorandom pattern generation method that improves the randomness of pattern generation. In addition, in combination with the D algorithm, this work proposes a new switching logic between the two algorithms based on counting the proportion of invalid patterns. When the algorithm is applied to a track circuit netlist, the fault coverage can reach nearly 100%; for large-scale circuits, however, fault coverage cannot easily reach 100%. Test results on standard circuits of different sizes show that, for the same generation time, the proposed algorithm improves fault coverage by more than 50% and 30%, respectively, compared with the two independent pattern generation methods, and significantly improves pattern generation efficiency. It is therefore well suited to the subsequent construction of high-speed railway equipment test platforms.

Introduction

Transportation is the lifeblood of the national economy and social development, and high-speed railways play an important role in the comprehensive transportation system. The performance, safety testing, and verification of high-speed railway equipment are directly related to the safe operation of high-speed railway systems. Therefore, improving the test accuracy and fault coverage of the related equipment boards and developing a corresponding test platform has become an urgent issue. Test cases are an effective approach to traditional board testing. The CEDEX Interoperability Railways Laboratory established the Eurocab test platform for semi-automatic tests and realized the analysis and evaluation of European railway equipment through test cases [1]. Wu et al. proposed a test generation technique based on a colored Petri net model for an advanced satellite-based train control system to generate more complete test cases; the number of test cases increased by 23% after the model was improved [2]. For the Chinese Train Control System 3, Zhou et al. optimized test cases through unified modeling language (UML) model diagrams using the static and dynamic modeling mechanisms of UML, effectively improving the generation efficiency and quality of the cases [3]. Li et al. proposed a timed-automata mutation test to simulate most types of system fault models, modeled the radio block center switching process, and improved the mutation score by approximately 11.8% through mutation analysis; new test cases were generated to improve its completeness [4]. Although the aforementioned test case methods have made great progress, they still need improvement in edge detection, system stability under limiting conditions, and test accuracy. A chip contains millions of logic gates, and in its design process many methods and EDA tools can verify its function, including automatic test pattern generation (ATPG) technology [5].
If high-speed railway equipment or a board is described as a circuit model, the methods used in integrated circuit design [6] are therefore of reference value for the system-level testing of high-speed railway equipment. The D algorithm was the first widely used deterministic test pattern generation algorithm in ATPG technology. As chip complexity has continued to grow, the limitations of the D algorithm, which performs a large amount of backtracking and many invalid choices, have become increasingly apparent [7]. Path-oriented decision making (PODEM) is an improvement on the D algorithm that reduces its backtracking and blind trials [8]. The fanout-oriented algorithm (FAN) is an improvement on PODEM that significantly improves generation efficiency by limiting the ATPG search space to reduce generation time and accelerate backtracking [9]. In addition, there are other popular algorithms such as HITEC and Socrates. HITEC presented a targeted D-element technique, which greatly increases the number of possible mandatory assignments and reduces the over-specification of state variables that can sometimes result from a standard PODEM algorithm [10]. Socrates, based upon the sophisticated strategies of the FAN algorithm, led to a considerable reduction in the number of backtrackings and earlier recognition of conflicts and redundancies [11], but its generation process is still complicated. Among the non-deterministic pattern generation methods aimed at reducing pattern generation time, pseudorandom pattern generation based on the linear feedback shift register (LFSR) is one of the most typical. It uses a pseudorandom number generator to generate test patterns and calculates the fault coverage of the generated patterns through fault simulation [12]. Compared with deterministic pattern generation, the pseudorandom method is simple, but it is more difficult for it to achieve high fault coverage; moreover, its random seed is single, so the randomness is limited. Souza et al. proposed a method combining the LFSR with a deterministic generation algorithm [13], but there is still room for improvement in its randomness and generation efficiency. Building on the combined generation algorithm, this work proposes a time-varying pseudorandom disturbance ATPG algorithm that combines pseudorandom pattern generation with the deterministic D algorithm: the system timer is used as the random seed to improve the randomness of pattern generation, and a new switching logic between the two algorithms based on counting the invalid pattern proportion is introduced. The algorithm is applied to the transmitter and receiver board system of the ZPW-2000A track circuit, and pattern generation is performed on the circuit netlist of its golden model [14]. The efficiency of the algorithm is then tested on the benchmark circuits S713, S1423, and S9234.

Time-Varying Pseudorandom Disturbance ATPG Algorithm

The ATPG algorithm takes the gate-level netlist of a circuit as its input file and uses it to create a fault list for the pattern generation process.
The fault list contains all possible fault types in the circuit, including the typical stuck-at fault, stuck-open fault, bridging fault, and some atypical faults. In this work, only the most common and effective stuck-at fault is considered, which represents the situation in which one line of the circuit is fixed to logic 1 or logic 0, represented as s-a-1 or s-a-0, respectively [15]. Netlist files are obtained through logical synthesis of the register-transfer-level code designed in Verilog.

A. Deterministic Pattern Generation Algorithm Based on the D Algorithm

The D algorithm is the most widely used deterministic pattern generation algorithm. It is based on path sensitization at the logic gate level [16] and uses a five-valued logic (0, 1, X, D, D') to describe the state of each lead in the circuit under the failure condition. The principle is shown in Figure 1. "0" indicates a signal with a normal value of 0 and a fault value of 0. "1" indicates that the normal value is 1 and the fault value is also 1. "D" represents a signal with a normal value of 1 and a fault value of 0, which can be recorded as 1/0. "D'" represents a signal with a normal value of 0 and a fault value of 1, which can be recorded as 0/1. "X" indicates that the value is undetermined. The D algorithm is also implemented in the C language and consists of three steps: (a) Fault sensitization (backward): the influence of the fault is exposed by driving the signal to the logical value opposite the fault. That is, activating an s-a-0 fault requires setting the value of the line to 1, whereas activating an s-a-1 fault requires setting it to 0. The input values that activate the fault are then obtained by further backward calculation [17]. (b) Fault propagation (forward): the D signal is propagated to the output of the circuit through one or more paths by setting the input node values of the relevant logic gates outside the fault point, so that the fault can be detected at the output, followed by backward justification to complete the test pattern [18]. (c) Line confirmation: the undetermined signal values in the circuit are derived from the existing node values until a non-contradictory set of values at the primary inputs is obtained, which constitutes a test pattern. When a selected value of 0 or 1 conflicts with a previously assigned value at a node, it is necessary to backtrack to the previous node and choose again [19]. Compared with random generation, the D algorithm achieves higher fault coverage, finds undetectable faults more easily, and uses fewer patterns. However, its calculation process is complex, requires more resources, and takes a long time [20].
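To make the five-valued notation concrete, here is a small C sketch (our addition; the encoding and the AND behavior follow the standard D-calculus rather than anything reproduced from the paper's Figure 1). Encoding each value as a (good-machine, faulty-machine) bit pair reduces gate evaluation to ordinary two-valued logic applied twice.

#include <stdio.h>

typedef enum { V0, V1, VX, VD, VDBAR } val5;   /* 0, 1, X, D, D' */

static const char *name[] = {"0", "1", "X", "D", "D'"};

/* good/faulty machine components; X is treated as fully unknown. */
static int good(val5 v)  { return v == V1 || v == VD; }
static int fault(val5 v) { return v == V1 || v == VDBAR; }
static int isx(val5 v)   { return v == VX; }

static val5 make(int g, int f) {
    if (g && f) return V1;
    if (g)      return VD;      /* good 1 / faulty 0 */
    if (f)      return VDBAR;   /* good 0 / faulty 1 */
    return V0;
}

/* AND of two five-valued signals; an X stays unknown unless the other
 * input is a controlling 0. */
val5 and5(val5 a, val5 b) {
    if ((isx(a) && b != V0) || (isx(b) && a != V0)) return VX;
    if (isx(a) || isx(b)) return V0;
    return make(good(a) && good(b), fault(a) && fault(b));
}

int main(void) {
    printf("D AND 1  = %s\n", name[and5(VD, V1)]);     /* D */
    printf("D AND D' = %s\n", name[and5(VD, VDBAR)]);  /* 0 */
    printf("D AND X  = %s\n", name[and5(VD, VX)]);     /* X */
    return 0;
}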
B. Time-Varying Seed Pseudorandom Pattern Generation Algorithm

Pseudorandom pattern generation in integrated circuits is often used in built-in self-test (BIST). As a common design-for-testability method, it can significantly improve the testability of random logic in circuits. BIST often uses an LFSR to constitute a pseudorandom sequence generation circuit [21], which has the advantages of a simple generation process and high coverage in a short time. The external exclusive-OR (XOR) LFSR structure with a conventional length of L is shown in Figure 2. In Figure 2, a set of initial assignment values from stage (L−1) to stage 0 is called a set of random seeds. At present, quantum random number generators (QRNGs) can achieve fully random number generation [22], but they are too complicated for ATPG technology. The simpler pseudorandom number generation method usually requires random seeds to generate a series of fixed pseudorandom numbers [23]. The feedback constant C_L is 0 or 1, and a single seed can generate a maximum of 2^L patterns. To generate other patterns, the random seed or the feedback constants need to be changed, so the randomness still has room for improvement.
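As an illustration of the external-XOR structure, the following C sketch steps an 8-stage Fibonacci LFSR. The tap set (x^8 + x^6 + x^5 + x^4 + 1) is one well-known maximal-length choice and is our assumption, not necessarily the configuration of the paper's Figure 2; with a maximal polynomial, a single nonzero seed cycles through all 2^L − 1 nonzero states.

#include <stdio.h>
#include <stdint.h>

/* One shift of an external-XOR (Fibonacci) LFSR of length L = 8. */
uint8_t lfsr_step(uint8_t s) {
    uint8_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 4)) & 1u;
    return (uint8_t)((s >> 1) | (bit << 7));
}

int main(void) {
    uint8_t seed = 0xACu;            /* any nonzero seed works */
    uint8_t s = seed;
    unsigned period = 0;
    do { s = lfsr_step(s); period++; } while (s != seed);
    printf("period = %u (expect 255 for L = 8)\n", period);
    return 0;
}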
This work uses the C language to realize the algorithm flow. The algorithm generates a pseudorandom sequence as a test pattern and then performs fault simulation on each pattern to identify all faults it can detect. After the detected faults are deleted from the fault list, the next test pattern is generated and the process repeats. Because the random seed of the LFSR-based pseudorandom pattern generation method cannot easily be changed, the randomness needs to be improved. The proposed algorithm uses the value of the system timer/counter to set the random seed, so that the seed changes over time and each pattern is generated from a different seed. By generating random numbers from 0 to 100 and testing whether each is greater than 50, 0/1 bits can be generated randomly up to the required pattern length. Compared with the LFSR of [13], the correlation between patterns is reduced and the randomness of the algorithm is enhanced. The generation flow is given in Algorithm 1 (pseudorandom pattern generation):
1: if seed < 1000
2:   then seed ← time((long *) NULL)
3:   else seed ← rand() % seed
4: srand(seed)
5: for K = 0 to max do
6:   if rand() % 100 > 50
7:     then input[K] ← '1'
8:     else input[K] ← '0'
9: end

However, pseudorandom pattern generation still has some problems, such as difficulty in finding hard-to-detect faults, a long time to reach the target fault coverage, and a large number of required patterns, so the generation efficiency still has much room for improvement.

C. Time-Varying Pseudorandom Disturbance ATPG Algorithm

Based on the time advantage of the pseudorandom pattern generation algorithm and the coverage advantage of the D algorithm, a time-varying pseudorandom disturbance ATPG algorithm is proposed in this work to improve generation efficiency with respect to the two important evaluation indexes, fault coverage and generation time. Figure 3a compares the variation of fault coverage with generation time between the pseudorandom generation algorithm and the D algorithm [24]. Coverage rises quickly with the early random patterns, but the improvement gradually slows down and cannot be raised significantly even after a long time. The D algorithm, by contrast, always maintains a certain rate of coverage improvement, significantly higher than that of pseudorandom pattern generation, indicating that it can handle many hard-to-detect faults that the pseudorandom patterns cannot find. Based on the generation characteristics of the two methods, the time-varying pseudorandom disturbance ATPG algorithm is designed; the variation of its fault coverage with generation time is shown in Figure 3b. The algorithm adopts the pseudorandom method first and then the D algorithm. First, pseudorandom pattern generation is used to achieve a high fault coverage in 5 s. After the curve flattens, the D algorithm deals with the remaining undetected faults in the fault list. Thus, considerable fault coverage can be achieved in a short time, and the generation efficiency is significantly improved. Parameters M, L, and S represent the running times of the algorithms: for the same coverage, the improved algorithm in Figure 3b makes the total generation time L + S smaller than the time M in Figure 3a [13]. In the pseudorandom generation process, a generated pattern that cannot detect any fault in the fault list is regarded as an invalid pattern. The variable α counts the invalid pattern ratio in order to control the switching timing between the two methods, adjust the proportion each method occupies, and improve the generation efficiency. When α ≥ n, pseudorandom generation ends and the D algorithm continues the generation process. The specific control scheme is given in Algorithm 2 (control of the algorithm switching timing):

1: while α < n do
2:   Generate a random pattern
3:   Simulate the pattern
4:   if faults_count = 0
5:     then P++
6:     α ← P/Q
7:   else Mark the fault as detected
8: end
9: Generate test patterns deterministically

P represents the number of generated invalid patterns, and Q represents the total number of generated patterns; α, the invalid pattern ratio counter, is the proportion of invalid patterns among all generated patterns.
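Read literally, Algorithms 1 and 2 translate into C roughly as follows. This is a sketch under assumptions: simulate_pattern() is a stub standing in for the paper's fault simulator, the pattern length is a placeholder, and the small warm-up on Q is our addition so that the ratio α is meaningful.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAXLEN 64          /* pattern length: placeholder value        */
#define N_THRESH 0.30      /* n = 30%, the switching point chosen here */

/* Stub: a real implementation would fault-simulate the pattern against
 * the netlist's fault list and return how many faults it detects. */
static int simulate_pattern(const char *pattern) {
    (void)pattern;
    return rand() % 4;     /* placeholder result */
}

/* Algorithm 1: time-varying-seed pseudorandom pattern generation. */
static void gen_pattern(char input[MAXLEN + 1], unsigned *seed) {
    if (*seed < 1000) *seed = (unsigned)time(NULL);
    else              *seed = (unsigned)rand() % *seed;
    srand(*seed);
    for (int k = 0; k < MAXLEN; k++)
        input[k] = (rand() % 100 > 50) ? '1' : '0';
    input[MAXLEN] = '\0';
}

int main(void) {
    char pat[MAXLEN + 1];
    unsigned seed = 0;
    double P = 0, Q = 0, alpha = 0;    /* invalid / total / ratio */

    /* Algorithm 2: random phase runs until alpha >= n. */
    while (Q < 10 || alpha < N_THRESH) {
        gen_pattern(pat, &seed);
        Q += 1;
        if (simulate_pattern(pat) == 0) P += 1;   /* invalid pattern */
        alpha = P / Q;
    }
    printf("switching to the D algorithm after %.0f patterns "
           "(alpha = %.2f)\n", Q, alpha);
    /* ...deterministic (D algorithm) generation would continue here... */
    return 0;
}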
Standard circuits S713, S1423, and S9234 were used for pseudorandom pattern generation, and the change in fault coverage with the value of n is shown in Figure 4; this curve can be used to select the algorithm switching timing. The curve is smooth and saturates, with its derivative gradually approaching 0. For n = 10%, the decrease of the derivative slows down markedly, and the fault coverage of the pseudorandom patterns has essentially reached its maximum and no longer changes significantly with n. It is difficult to improve the fault coverage effectively by increasing n further, and doing so has limited influence on the overall coverage of the algorithm. To balance fault coverage against generation time and ensure that the highest fault coverage is achieved in the shortest time, n = 30% was selected as the algorithm switching point in this work. Compared with the switching logic of [13], which uses a timer to control the generation time of the pseudorandom patterns, this work achieves a reasonable control of the specific switching timing by using the same variable n across the actual pattern generation of different circuits. This method reduces blindness and makes the fault coverage of different circuits as consistent as possible at a given switching point as n varies.

Results and Discussion

The logic of the algorithm testing is described in Figure 5. Applying ATPG to high-speed railway equipment requires the design of a corresponding golden model. The golden model is designed according to the equipment system; its netlist, obtained after logical synthesis, is input into the ATPG algorithm to generate patterns. The transmitter and receiver of the high-speed railway ZPW-2000A track circuit were designed to realize the modulation and demodulation functions of frequency-shift keying (FSK) signals, respectively. First, the designed golden model netlists were used for pattern generation (11 gates for the transmitter and 20 gates for the receiver), and the fault coverage reached nearly 100% using the proposed algorithm. As large-scale circuit fault coverage cannot easily reach 100%, the medium-scale circuits S713 (393 gates) and S1423 (657 gates) and the large-scale circuit S9234 (5597 gates) were also tested. For the same base generation time, the pseudorandom pattern generation and the D algorithm used alone were compared with the disturbance algorithm. Owing to the randomness in the algorithm, the fault coverage results differ from run to run; the multiple generation results for circuit S713 are taken as an example, as shown in Figure 6. As can be seen from the figure, the results of multiple tests of the disturbance algorithm on S713 fluctuate within a certain range.
Thus, five groups of repeated experiments were required to take the average value, and the number of generated patterns, detected faults, and fault coverage of each algorithm were counted, as shown in Table 1 and Figure 7. The generation results in Figure 7 show that in the same generation time, compared with the D algorithm, the pseudorandom pattern generation has the lowest fault coverage and requires more patterns. The number of detected faults and fault coverage reached the highest level after the combination of the two methods. The fault coverage increased by more than 50% and 30% compared with the independent method, so the efficiency of the algorithm was significantly improved. The generation results in Figure 7 show that in the same generation time, compared with the D algorithm, the pseudorandom pattern generation has the lowest fault coverage and requires more patterns. The number of detected faults and fault coverage reached the
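The switching behavior described above lends itself to a compact sketch. The following Python fragment is a minimal illustration of the idea, not the paper's implementation: simulate_faults and d_algorithm_pattern are hypothetical stand-ins for a fault simulator and a deterministic generator, the 64-bit pattern width is an arbitrary assumption, and the only point being made is that the pseudorandom phase is capped at n = 30% of the pattern budget before the deterministic phase takes over.

```python
import random

def hybrid_atpg(fault_list, pattern_budget, simulate_faults,
                d_algorithm_pattern, n_switch=0.30, width=64):
    """Hybrid flow sketch: pseudorandom patterns first, then deterministic
    (D-algorithm) patterns for the faults that random testing missed.
    simulate_faults(pattern, faults) -> set of faults the pattern detects;
    d_algorithm_pattern(fault) -> a pattern detecting `fault`, or None.
    Both callables are assumed to be supplied by the test environment."""
    undetected = set(fault_list)
    patterns = []

    # Phase 1: pseudorandom generation, capped at n_switch of the budget,
    # since coverage saturates and further random patterns add little.
    for _ in range(int(n_switch * pattern_budget)):
        if not undetected:
            break
        p = tuple(random.randint(0, 1) for _ in range(width))
        newly = simulate_faults(p, undetected)
        if newly:                      # keep only patterns that detect something
            patterns.append(p)
            undetected -= newly

    # Phase 2: deterministic patterns for the remaining hard-to-detect faults.
    for fault in list(undetected):
        if fault not in undetected:    # may already be covered by a newer pattern
            continue
        p = d_algorithm_pattern(fault)
        if p is not None:
            patterns.append(p)
            undetected -= simulate_faults(p, undetected)

    coverage = 1.0 - len(undetected) / len(fault_list)
    return patterns, coverage
```

Replacing the time-based switch of [13] with a coverage-saturation cap like this is what makes the switching point comparable across circuits of different sizes.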
7,161
2022-05-29T00:00:00.000
[ "Computer Science" ]
Segmentation of Fuzzy Enhanced Mammogram Mass Images by using K-Mean Clustering and Region Growing

With the intention of supporting radiologists in the identification and classification of mammogram images, various methods have been proposed by researchers over the past two decades. In this technical paper, we propose segmentation of digital mammogram images with k-means clustering and region growing techniques, intended to help specialists or radiologists locate cancerous areas with computer-aided techniques. The proposed work is divided into two stages. In the first, preprocessing stage, we apply a median filter to remove unwanted salt-and-pepper noise, and we then apply the fuzzy intensification operator (INT) to improve the contrast of the input images. In the second stage, the enhanced fuzzy image serves as input for k-means clustering, after which a region growing technique is applied to the clustered image to partition the mammogram into homogeneous areas according to pixel intensity. For the experiments, we used the mini-MIAS dataset. The experimental results show that the proposed strategy achieves high precision. Keywords—INT operator; feature extraction; k-mean clustering; mammogram; median filter; segmentation

I. INTRODUCTION
Cancer is one of today's major fatal diseases, comprising many different related ailments. In every category of cancer growth, somatic cells begin to divide uncontrollably and disseminate into the surrounding tissue. Breast carcinoma, in particular, arises from breast gland tissue. The breast consists of billions of microscopic cells; when these cells begin to proliferate compulsively, breast cancer results. Breast cancer can be partitioned into two sorts: ductal carcinomas and lobular carcinomas. Ductal carcinomas are the most prominent cancers and commence in the mammary ducts, whereas lobular carcinomas grow around the lobes [1]. Previously, without effective methods, diagnosis and treatment often ended ruinously, and at each successive stage of carcinoma the death rate from the disease rises. The need to decrease the death rate motivates earlier breast cancer identification strategies, so automated computerized detection is unavoidable. There is no single definitive cause of breast carcinoma; only causative risk factors are known, which may be hereditary or environmental. Hereditary factors include family history, personal health history, menstrual and reproductive history, dense breast tissue, certain genome changes, age, sex, etc. The environmental factors include corpulence, poor eating habits, alcohol consumption, radiation, low physical activity, etc. [2]. The elementary manifestation of breast cancer is the formation of lumps, caused by tiny deposits of calcium called microcalcifications and by tumors called circumscribed masses; such tumors are often benign rather than malignant. Benign tumors are generally non-aggressive and non-harmful and do not disseminate to other body parts [3]. There are several distinctive imaging systems for early prediction of breast cancer, including MRI, X-ray imaging, ultrasound imaging, computerized mammography, screening, etc.
Computerized mammograms are widely used at present on account of their advantages over other modalities. X-ray imaging is typically used to discover indications of carcinoma, while mammograms are generally used to examine the problem; a mammogram uses X-rays to form images of the breasts [4]. Earlier there were film mammograms, in which images were stored on film; today computerized mammography is generally used, which captures and stores images directly on a digital computer, making every corner and niche visible for simple detection. In breast ultrasound [5], sound waves are used to form the images. However, this modality is less used nowadays because it relies on handheld devices: it can produce false positives as well as false negatives if the operator is not well trained, and the quality of the images varies accordingly. Point-wise identification within the images is done by applying feature extraction and texture extraction methods [6]. This alone opens up other research areas, as a broad assortment of strategies is employed for segmentation, feature extraction, and enhancement, done mostly with wavelet techniques [7], clustering using the GLCM matrix [8], etc., as depicted plainly in the related works. Therefore the definitive aim of this survey is to furnish distinctive enhancement, detection, and classification approaches for quick recognition of breast carcinoma. Several authors have published research papers on breast region segmentation based on differences in density. For instance, Saidin et al. [9] proposed a technique to segment the breast into four regions: background, skin-air boundary, fatty boundary, and pectoral muscle boundary. Karssemeijer [10] used an approach for subdividing mammograms into three separate regions: breast tissue, pectoral muscle, and background. Adel et al. [11] discuss a way to segment the breast region into three distinct areas, pectoral muscle, fatty, and fibroglandular regions, using Bayesian techniques with the adaptation of a Markov random field for region detection of the various tissues in mammograms. El-Zaart et al. [12] provide mammogram images segmented into three regions: fibroglandular disc, breast region, and background. Camilus et al. [13] propose a graph-cut computation technique for finding the pectoral muscle automatically. The intended work is divided into two phases. In the primary phase, we apply preprocessing: a median filter is used to remove unwanted salt-and-pepper noise, and the fuzzy intensification operator (INT) is applied to strengthen the contrast of the input picture. In the subsequent phase, the enhanced fuzzy image is used as input for k-means clustering; the region growing algorithm is then applied to the previously produced clustered image to divide the mammogram into homogeneous regions in accordance with pixel intensity. For the experiments we used the mini-MIAS dataset. The trial results indicate that the intended strategy achieves high precision.
The remainder of this paper is arranged as follows: the introduction is provided in Section I; Section II discusses materials and methods; Section III portrays the proposed method; Section IV includes results and discussion; the conclusion and future work are provided in Section V; we end with the references in Section VI.

A. Database
To test the proposed methodology we used the mini-MIAS dataset [14]. This dataset consists of 322 mammogram images belonging to three separate classes: 106 images belong to the Fatty (F) class, another 104 belong to the Fatty-Glandular (G) class, and the remaining 112 belong to the Dense-Glandular (D) class. All images have a size of 1024 × 1024 pixels in PGM format. Each pixel corresponds to an 8-bit word, so the images are in grayscale format with pixel intensities in the range [0, 255] [15].

B. Median Filter
The median filter [16] is recurrently used to overcome salt-and-pepper noise while trimming down noise in general; it also preserves edges in the image. To apply the median filter, we select a square window of size 2k+1, where k lies between 1 and N, around the considered pixel. We then sort all pixel values belonging to the square window, calculate the median value, and replace the considered pixel value with that median. This process is repeated for every pixel in the image, from left to right and top to bottom. Fig. 1 illustrates an example calculation. In this example we select k = 1, so the window size is 3 × 3. It can be seen that the central pixel value of 120 is rather unrepresentative of the surrounding pixels and is replaced with the median value, 99. A 3 × 3 square neighborhood is used here; larger neighborhoods produce more severe smoothing.

C. Image Enhancement using Fuzzy Intensification Operator
Image enhancement using fuzzy logic [17] is done via fuzzification, which converts the image from the spatial domain to the fuzzy domain; enhancement of the fuzzy membership values; and defuzzification, which converts the image back from the fuzzy domain to the spatial domain.
1) Fuzzification: An image X of size M×N can be viewed as an array of fuzzy singletons by converting it into fuzzy set notation, $X = \bigcup_{i=1}^{M} \bigcup_{j=1}^{N} \mu_{ij}/x_{ij}$, where $\mu_{ij} \in [0,1]$ is a membership value that represents the amount of brightness possessed by the pixel intensity value $x_{ij}$ at the i-th row and j-th column of the image [18].
2) Modification of the membership function: The goal of our proposed method is to take account of the fuzzy nature of an image, to make the contrast improvement more adjustable and valuable, and to prevent over-enhancement or under-enhancement. We therefore employ an adjustment of the membership function that transforms membership values above a threshold to much larger values, and membership values below the threshold to much smaller values, in a nonlinear manner, to obtain an enhanced image; otherwise, the unenhanced image is displayed. Thus, the final image is a contrast-improved image [18].
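Sections B and C admit a compact sketch. The Python snippet below is an illustrative reading, not the authors' code: the linear fuzzification and the crossover point of 0.5 are simplifying assumptions standing in for the Pal and King membership function and threshold, while the INT operator itself follows its standard form.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(img, k=1, crossover=0.5):
    """Median filtering followed by fuzzy contrast intensification.
    img: 2-D uint8 grayscale mammogram, values in [0, 255]."""
    # Step 1: (2k+1)x(2k+1) median filter removes salt-and-pepper noise.
    denoised = median_filter(img, size=2 * k + 1)

    # Step 2a: fuzzification -- map gray levels to memberships in [0, 1]
    # (a simple linear mapping is used here in place of Pal and King's form).
    mu = denoised.astype(np.float64) / 255.0

    # Step 2b: INT operator -- values above the crossover are pushed up,
    # values below are pushed down, which increases contrast nonlinearly.
    mu_int = np.where(mu <= crossover,
                      2.0 * mu ** 2,
                      1.0 - 2.0 * (1.0 - mu) ** 2)

    # Step 2c: defuzzification -- back to the spatial (gray-level) domain.
    return np.clip(mu_int * 255.0, 0, 255).astype(np.uint8)
```

In practice the INT operator can be applied more than once for stronger intensification; a single pass is shown here for simplicity.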
D. Histogram-based Self-Initializing K-Means Clustering
K-means clustering [20] is among the most applicable unsupervised learning methods for classification; it normally requires manual initialization of the cluster count and of the primary centroid of each cluster. Automatic initialization of the initial centroids can be achieved with a uniform distribution over the gray values of the normalized histogram.
1) Algorithm 1: K-means clustering
a) Input the image X and initialize the number of clusters K.
b) Calculate the histogram of X. Find the smallest and largest occupied histogram gray levels, say p and q. Based on the values of p and q, apply a discrete uniform distribution to initialize the initial centroids $c_j$, where j = 1, 2, ..., K. The general formula for the probability density function (pdf) of the uniform distribution is $f(x) = 1/(q-p)$ for $p \le x \le q$.
c) Allocate each pixel to the nearest class. This is done by minimizing the objective function $J = \sum_{j=1}^{K} \sum_{x_i \in C_j} \lVert x_i - c_j \rVert^2$.

E. Region-based Segmentation
Region-based segmentation (RBS) [21,22] is a method for finding regions directly, on the basis of some region membership criterion. The basic steps of RBS are:
a) Pick a set of seed points. The number of seed points depends on the number of segments; seed point selection is based on user choice or some random process.
b) Starting from the selected seed points, expand the regions to neighboring points according to the region membership norms. The norms may be pixel intensity, pixel color, or texture.
c) Repeat step b) for each of the newly added pixels; stop when no more pixels can be added.

III. PROPOSED METHOD
This section elaborates the working of the intended method for identifying and segmenting the boundary of breast tissue areas in mammogram images. Our suggested method proceeds in two phases. The first phase, preprocessing, comprises two subsections, namely noise removal and improvement of image contrast: a median filter is utilized for removal of salt-and-pepper noise, and the contrast of the image is strengthened with the help of the fuzzy Pal and King method. In the second phase, the image is split into multiple clusters by applying k-means clustering, and, using the Seed Based Region Growing (SBRG) technique, the required cluster is segmented by selecting the seed/start point.
a) Proposed Algorithm
Step 1: Input the mammogram images.
Step 2: Apply the median filter to remove noise from the images.
Step 3: Convert the images obtained from Step 2 from the spatial domain to the fuzzy domain; apply the Pal and King INT operator on the fuzzy domain, which transforms low-contrast areas into higher-contrast areas; and finally transform the modified fuzzy domain back into an intensified image using the defuzzification process.
Step 4: Perform the histogram-based self-initializing k-means clustering algorithm, which partitions the images into a number of clusters. The cluster count is specified before processing.
Step 5: Apply the SBRG methodology, which segments the appropriate region by growing regions from the seed/start points to adjoining points in accordance with the region membership criterion.
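For concreteness, Steps 4 and 5 can be sketched as follows. This is a minimal reading under stated assumptions (gray-level-only clustering, 4-connectivity, a fixed intensity tolerance, and a user-supplied seed), not the authors' implementation.

```python
import numpy as np
from collections import deque

def kmeans_gray(img, K, iters=20):
    """Histogram-based self-initializing k-means on gray levels (Step 4)."""
    p, q = int(img.min()), int(img.max())
    # Spread the initial centroids uniformly over the occupied range [p, q].
    centroids = np.linspace(p, q, K).astype(np.float64)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (minimizes J).
        labels = np.abs(img[..., None] - centroids).argmin(axis=-1)
        for j in range(K):
            if np.any(labels == j):
                centroids[j] = img[labels == j].mean()
    return labels

def region_grow(img, seed, tol=10):
    """Seed-based region growing with 4-connectivity (Step 5)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])             # membership criterion: closeness to seed
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x]:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
               and abs(float(img[ny, nx]) - ref) <= tol:
                queue.append((ny, nx))
    return mask
```

In the proposed flow, the seed would be placed inside the cluster containing the suspicious region, and region_grow would then return the segmented mass.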
IV. EXPERIMENTS AND RESULT
Thirty-one MIAS images were brought into play for the experiment: 15 were cancerous images and 16 were normal images. Applying the proposed algorithm to the 15 cancerous images, 12 were detected correctly, while in the remaining 3 images the cancerous region including the tumor was located incorrectly. When the proposed methodology was applied to the 16 normal images, the algorithm wrongly indicated significant cancerous tissue in 2 images, while for the other 14 images it determined that they do not contain carcinoma. Fig. 2 shows the results of the proposed method on the various mammogram images. Fig. 3(a) depicts the original image; Fig. 3(b) shows the mammogram after the preprocessing step; the enhanced mammogram after the fuzzy INT operation is shown in Fig. 3(c); Fig. 3(d) depicts the segmented image after k-means clustering; and finally Fig. 3(e) shows the segment extracted by using the region growing technique.

The MCC analyzes the correlation between the observed and predicted classifications and is defined as $\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$. The F-measure ranges from 0 to +1 and the MCC lies between -1 and +1; higher values of both the F-measure and the MCC indicate higher classification quality.

V. CONCLUSION
To support radiologists in the identification and categorization of mammography images, different methods have been sustained by researchers over the past decades. In this research report, we aim at mammography image segmentation with k-means clustering and region growing techniques, to benefit experts or radiologists in detecting cancerous regions with computer aids. The intended work is carved into two stages. In the first stage, preprocessing is employed: we retain a median filter that expels unwanted salt-and-pepper noise, and we then take the Pal and King approach to reinforce the contrast of the input images. In the subsequent stage, the enhanced image is exerted as input for k-means clustering; the image is divided into multiple clusters, and with the Seed Based Region Growing (SBRG) technique the required cluster is segmented by selecting the seed point. For the experiments, we used the mini-MIAS dataset. The outcome of the experiments demonstrates that the proposed strategy achieves high precision.
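As a worked illustration of the quality measures defined in Section IV, the counts reported above (12 of 15 cancerous images detected; 2 false alarms among 16 normal images) can be plugged in directly; mapping those counts to TP = 12, FN = 3, FP = 2, TN = 14 is our reading of the text, not a calculation the authors report.

```python
import math

TP, FN, FP, TN = 12, 3, 2, 14   # counts as read from the experiment section

precision = TP / (TP + FP)                                   # 12/14 ~ 0.857
recall    = TP / (TP + FN)                                   # 12/15 = 0.800
f_measure = 2 * precision * recall / (precision + recall)    # ~0.828

mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))           # ~0.678

print(f"F-measure = {f_measure:.3f}, MCC = {mcc:.3f}")
```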
3,178
2020-01-01T00:00:00.000
[ "Computer Science" ]
SEROLOGICAL EVIDENCE OF EXPOSURE TO Ehrlichia canis IN CATS EVIDÊNCIAS SOROLÓGICAS DA EXPOSIÇÃO DE GATOS A Ehrlichia canis

The aim of the present study was to estimate the occurrence of Ehrlichia canis in cats from the semiarid region of Northeast Brazil. Sera of 101 healthy cats were submitted to the Indirect Immunofluorescence Assay (IFA) and considered positive when antibody titers ≥ 40 were obtained. A seroprevalence of 35.6% (36/101) was found, with the following titers: 40 (15 animals); 160 (6); 320 (1); 640 (3); and 2,560 (11). No statistical differences were observed when comparing county of origin, gender, age, breed, and modus vivendi (pet and stray cats), and no ticks were observed on any of the cats. This study revealed exposure to E. canis in cats of the semiarid Northeast of Brazil.

Introduction
Ehrlichia sp. is a gram-negative, pleomorphic, obligate intracellular bacterium belonging to the Anaplasmataceae family, Rickettsiales order, that affects leukocytes and thrombocytes and potentially infects a large variety of mammal species (1). Transmission to the host occurs predominantly via vector, through the bite of an infected tick, an event to which the high prevalence of ehrlichiosis in tropical and subtropical regions can be attributed, given the geographical distribution of the vectors involved in transmission (2). The Indirect Immunofluorescence Assay (IFA) is traditionally used for the diagnosis of human and canine ehrlichiosis (3) and can also be used for the diagnosis of feline ehrlichiosis (4). The first evidence of naturally occurring ehrlichiosis in cats was provided in 1986 by Charpentier and Groulade in France (5). In Brazil, the first report was in 1998, by hematoscopy, through the observation of Ehrlichia sp. morulae in leukocytes of a cat with clinical signs similar to those described in dogs with this disease (6).

Rhipicephalus sanguineus is the most important tick parasitizing dogs in Brazil (17). Although rarely found on cats, this tick is seen as the main vector of feline ehrlichiosis (18). Predominant signs reported in cats include fever, anorexia, and lethargy; myalgia, dyspnea, anemia, thrombocytopenia, and pancytopenia have been described as well. However, in most cases the cats are asymptomatic (8,19).

Little is known about the exposure of the Brazilian feline population to E. canis because most studies focus on canine infections (20). The present study aimed to verify the occurrence of anti-E. canis antibodies in cats, and the possible associated risk factors, in the counties of Juazeiro, Bahia State (BA), and Petrolina, Pernambuco State (PE), located in the middle region of the São Francisco Valley, in the semiarid Northeast of Brazil, belonging to the Petrolina-Juazeiro Pole of the Integrated Network for Economic Development. From September 2012 to July 2013, convenience sampling was conducted, corresponding to 101 clinically healthy cats without restriction of age, breed, or sex, originating from routine vaccination or other periodic control practices at a veterinary clinic, as well as animals from the Center of Zoonosis Control of both counties.
Peripheral blood samples were collected from the jugular or the left or right cephalic vein into dry tubes, properly identified, and then centrifuged at 3,500 rpm for 10 min to obtain the serum within 24 hours of collection. Sera were stored in 1.5 mL microtubes at -20 °C until examination. Information about county of origin, sex, age, breed, and modus vivendi (pet or stray cat) was obtained. The samples were collected following the ethical standards of animal experimentation established by the Committee of Ethics and Deontology in Studies and Research of the Federal University of São Francisco Valley (protocol number 11/161012).

The occurrence of anti-E. canis IgG antibodies was assessed by IFA, using E. canis strain Cuiabá 16 (21) as antigen, with the cut-off point at an initial dilution of 1:40 (9,22,23). Commercial fluorescein isothiocyanate-conjugated anti-cat IgG (Sigma-Aldrich, USA) was used as the conjugate at a dilution of 1:1000, and the antigen preparation and IFA technique were performed as previously described (24). Both positive and negative control sera were included in each assay. Statistical evaluation was carried out with the Chi-square test (χ²) or Fisher's exact test with a 95% confidence interval.
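For readers who wish to reproduce this kind of analysis, the snippet below sketches the two ingredients named above. The 36/101 counts come from this study, but the 2×2 table is a made-up placeholder illustrating the shape of a county-of-origin comparison, not the actual data, and the Wald interval is one simple choice of confidence interval.

```python
import numpy as np
from scipy import stats

# Seroprevalence with an approximate 95% confidence interval (Wald).
pos, n = 36, 101
p_hat = pos / n                                  # 0.356 -> 35.6%
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(f"Seroprevalence: {p_hat:.1%}, 95% CI: {ci[0]:.1%}-{ci[1]:.1%}")

# Chi-square test on a hypothetical 2x2 table (positive/negative by county);
# the cell counts are placeholders, not the study's data.
table = np.array([[20, 35],
                  [16, 30]])
chi2, p_value, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")

# Fisher's exact test is preferred when expected cell counts are small.
odds, p_exact = stats.fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")
```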
None of the cats in the present study presented clinical signs, such as those found by Correa et al. (33). It is known that clinical cases of ehrlichiosis in these animals are not common (8). This aspect suggests that felines might act as potential asymptomatic reservoirs and sentinels of E. canis and other vector-borne agents (16,33). No statistical differences were observed when comparing the presence of antibodies in cats according to county of origin, gender, age, breed, or modus vivendi, similarly to previous reports (14,22). However, the high frequency of positive adult and stray cats demonstrated in this work may be explained by their longer time of exposure and higher possibility of contact with the vectors, respectively.

Conclusion
These results revealed that the cats of the semiarid Northeast of Brazil were exposed to E. canis. Considering this fact, adopting effective prophylaxis and control measures against the vector becomes necessary to prevent infection of cats by E. canis and other vector-borne diseases, as well as their potential transmission to other animal species and to human beings. Further investigations of E. canis and other vector-borne agents are needed to better understand and define the role of cats in the epidemiology of ehrlichiosis.
1,696.8
2016-07-29T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
MEASUREMENT OF SUSTAINABLE DEVELOPMENT WITH ELECTRIFICATION OF HOUSEHOLDS IN INDONESIA

This study offers a sustainable development measurement using three variables: the human development index (HDI), representing sustainable socioeconomic development; environmental quality (EQ), representing environmental sustainability; and, as the exogenous variable, household electrification (EoH). Using structural equation modeling, the results showed that EoH is positively and significantly correlated with HDI, EoH is negatively and significantly correlated with EQ, and HDI is significantly negatively correlated with EQ. Electrification of households brings about sustainable socioeconomic development; at the same time, electrification of households affects environmental sustainability, and sustainable socioeconomic development in turn affects environmental sustainability. The research novelty is the moderating role of EoH in the relationship between HDI and EQ, whereby provinces with low household electrification and provinces with high household electrification differ in the environmental damage caused by sustainable socioeconomic development. The study provides a reference for policymakers to replace the fossil fuel power plants that supply electricity to households with environmentally friendly power plants.

INTRODUCTION
Access to electricity is an essential driver of economic development. Cheap and easy-to-obtain electricity is crucial for households and for development progress. Meanwhile, Indonesia's major power plants largely run on fossil fuels, which imposes a heavy ecological burden. Electricity is unevenly supplied to households in Indonesia: there are areas where only 33% of households have electricity and areas where 99% do. The conception of the three pillars of sustainability (social, economic, environmental), represented by three intersecting circles with overall sustainability at the center of the slice of the three rings, is an attempt to reconcile economic growth as a solution to social and ecological problems (Purvis et al., 2019).

The primary purpose of the research is to measure sustainable development with the electrification of households in Indonesia. The research's novelty is to analyze the moderating role of household electrification (EoH) in the relationship between HDI and environmental quality (EQ). This research focuses on thirty-three provinces in Indonesia with 9 years of data, from 2011 to 2019. To measure the achievement of sustainable development, the environmental, economic, and social dimensions can be captured by the environmental sustainability index (EQI) and the human development index (HDI) (Strezov et al., 2017). Increased government spending on electricity supply infrastructure improves the human development index (Sulistyowati et al., 2017). The opposite direction of the relationship shows the same, as conveyed by Sarkodie and Adams (2020): the human development index positively impacts electricity access. According to Caraballo Pou and Simón (2017), renewable energy consumption has a positive impact on the human development index, and the emission effect of renewable energy on CO2 is smaller than the pollution effect of non-renewable energy.
There is a difference between developed and developing countries in the relationship between electricity consumption per capita and the human development index (Nadimi and Tokimatsu, 2018). The growth of household electricity consumption impacts sustainable economic and environmental development. Climatic conditions have a tremendous impact on household electricity consumption and should be a major consideration when making differentiated household electricity policies (Meng et al., 2019).

From the existing studies, a research gap on the relationship of the human development index (HDI) to environmental quality (EQ) emerges from diverging research results. Studies that empirically found a positive and significant relationship between HDI and EQ include Shanty et al. (2018), who report that human quality positively affects environmental quality. The human development index is positively related to the environmental performance index, the explanation being that a higher accumulation of human resources lowers environmental damage and yields better environmental performance (Jain and Nagpal, 2019). Education has been shown to reduce emissions (Balaguer and Cantavella, 2018). Different results were presented by Hickel (2020), who states that countries with high human development indexes (HDI) also contribute the most to climate change and other forms of ecological damage. Syaifudin and Wu (2020) emphasize that short-term goals focusing on economic and social aspects ignore environmental elements; increased education levels have further compensated the increase in CO2 emissions per capita from economic growth. There is a U-shaped relationship between real income and the ecological footprint (Destek et al., 2018). Finally, the study results can provide a useful reference to measure a country's sustainable development in a fast and straightforward way and help policymakers design and plan development on the right path of sustainable development.

RESEARCH FRAMEWORK
This study has produced an analysis of the relationship between household electrification (EoH) and sustainable socioeconomic development (HDI), as well as of the relationship between household electrification (EoH) and environmental quality (EQ). The paper continues by measuring sustainable development through the relationship between the achievement of sustainable socioeconomic development (HDI) and environmental quality (EQ). The literature studies present a summary of the concepts of sustainable development and indicators adapted from previous studies, followed by the proposed concept of the sustainable development measurement model and the hypotheses to be tested. The methodology, in overview, is: first, collection of the analyzed indicator data, which is then analyzed by structural equation modeling (SEM) using Warp Partial Least Squares (WarpPLS-SEM), which gives stable and reliable results.

Environmental Quality (EQ)
Based on data from the Indonesian Ministry of Environment and Forestry and the National Medium-Term Development Plan (RPJMN) for 2015 to 2019, environmental quality management policy is directed at improving the environmental quality index, which reflects the conditions of water quality, air quality, and land cover, strengthened by increasing the capacity of environmental management and environmental law enforcement.

Land cover quality index (LCQ)
The land cover quality index (LCQ) improves on the forest cover index (FCI) used before 2015.
The improved LCQ calculation method elaborates several key parameters that describe conservation aspects, rehabilitation aspects, and spatial aspects of rural areas, while remaining easy to present and understand. Land cover quality index data for the thirty-three provinces from 2011 to 2019 are shown in Table 1.

Environmental quality index (EQI)
The environmental quality index (EQI) has been developed since 2009 as a national environmental management performance index and a standard reference for all parties in measuring environmental protection and management performance. EQI data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 2.

Air quality index
Air pollution is one of the problems faced by several regions of the world, and Indonesia is no exception; air quality has trended downward in several major Indonesian cities in recent decades. In addition, the need for transportation and energy is increasing in line with the growing population. Increased transportation and energy consumption will increase air pollution, which will impact human health and the environment. Air quality index (AQI) data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 3.

Human Development Index
According to Indonesia's Central Statistics Agency (BPS), the HDI is an important indicator for measuring success in building people's quality of life, and it can determine the ranking or level of development of a region (province). For Indonesia, the HDI is strategic data, not least because it serves as a measure of government performance.

Mean years of schooling
Mean years of schooling (MYS) is defined as the number of years the population spends in formal education. MYS data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 4.

Expected years of schooling
The expected years of schooling figure (HLS in the Indonesian abbreviation; OSE here) is defined as the length of education (in years) that a child of a certain age is expected to experience in the future. OSE data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 5.

Life expectancy at birth
Life expectancy at birth (LEB) is defined as the estimated average age a person will reach from birth. LEB data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 6.

Adjusted per capita expenditure (ACE)
Adjusted per capita expenditure (ACE) is defined as per capita expenditure on food and non-food commodities at constant (real) prices. ACE data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 7.

Household Electrification (EoH)
The percentage of households with a PLN electricity lighting source covers sources of electric lighting in households managed by the state electricity company (PLN). EoH data for the thirty-three provinces in Indonesia from 2011 to 2019 are shown in Table 8.

RESEARCH METHOD
This research uses multivariate statistical methods, namely structural equation modeling (SEM). Research involving several variables warrants multivariate analysis when the variables are observed in unison, or simultaneously, within the study. Data analysis is done simultaneously in research where the variables are interconnected, both theoretically and empirically. In multivariate analysis, the relationships between variables are included in the calculation process, and the interpretation of the analysis results is made comprehensively; this is in harmony with the fact that multivariate analysis already considers the relationships between variables.
The analysis uses variance-based and factor-based structural equation models (SEM), estimated with least-squares and factor-based methods (Kock, 2015c) (Kock, 2015a). There are ten model fit and quality indices (Kock, 2010) (Kock, 2014) (Kock, 2015d), as follows (Table 9). Based on the WarpPLS User Manual Version 7.0:
• For APC, ARS, and AARS, the P-value is computed through a process that involves resampling estimation plus a correction to counteract the standard-error compression effect associated with adding a random variable, in a way analogous to Bonferroni corrections.
• It is recommended that, ideally, AVIF and AFVIF be equal to or lower than 3.3, especially in models where most variables are measured through two or more indicators. A looser (acceptable) criterion is that both indices be equal to or lower than 5, especially in models where most variables are single-indicator variables (and thus not real latent variables).
• GoF. Like the ARS, the GoF index, which refers to the Tenenhaus GoF in honor of Michel Tenenhaus, is a measure of the model's explanatory power (Kock, 2015d). Tenenhaus et al. (2005) define the GoF as the square root of the product of the mean communality index and the ARS, i.e., $\mathrm{GoF} = \sqrt{\overline{\mathrm{communality}} \times \mathrm{ARS}}$.
• The SPR index measures the extent to which the model is free of instances of Simpson's paradox (Kock, 2015b) (Kock and Gaskins, 2016). An instance of Simpson's paradox occurs when the path coefficient and the correlation associated with a pair of linked variables have different signs. Ideally, the SPR should be equal to 1, which means that there are no instances of Simpson's paradox in the model; an acceptable SPR value is equal to or greater than 0.7, meaning that at least 70% of the paths in the model are free of Simpson's paradox.
• The RSCR index measures the extent to which the model is free from negative R-squared contributions, which occur together with instances of Simpson's paradox. When a predictor latent variable makes a negative contribution to the R-squared of the criterion latent variable (note: the predictor points to the criterion), the predictor reduces the percentage of variance explained in the criterion; such a reduction takes into account the contributions of all predictors plus the residual. This index is similar to the SPR. Ideally, the RSCR should be equal to 1, meaning no negative R-squared contributions in the model. An acceptable RSCR value is equal to or greater than 0.9, which means that the sum of the positive R-squared contributions in the model makes up at least 90% of the total sum of the absolute R-squared contributions in the model.
• The SSR index measures the extent to which a model is free of instances of statistical suppression. An instance of statistical suppression occurs when a path coefficient is greater in absolute terms than the associated correlation for a pair of linked variables. Like an instance of Simpson's paradox, an instance of statistical suppression is a possible indication of a causality problem, suggesting that the hypothesized path may be implausible or reversed. An acceptable SSR value is equal to or greater than 0.7, which means that at least 70% of the model's paths are free from statistical suppression.
• NLBCDR. One of the interesting properties of nonlinear algorithms is that the coefficient of a nonlinear bivariate association varies depending on the hypothesized direction of causality.
Such coefficients tend to be stronger in one direction than in the other, meaning that the residuals (or errors) are larger when the causality is assumed in one direction rather than the other. This can be used, along with other coefficients, as partial evidence for or against a hypothesized causal relationship. An acceptable NLBCDR value is equal to or greater than 0.7, which means that for at least 70% of the path-related instances in the model, the support for the reversed hypothesized direction of causality is weak or absent.

Hypotheses
The research hypotheses consist of four hypotheses, tested according to the design of the research objectives. The hypotheses are:
• H1: Household electrification (EoH) has a positive effect on the human development index (HDI).
• H2: Household electrification (EoH) negatively affects environmental quality (EQ).
• H3: The human development index (HDI) negatively affects environmental quality (EQ).
• H4: Household electrification (EoH) moderates the relationship between the human development index (HDI) and environmental quality (EQ).

Research Model Path Analysis
Figure 1 shows the research model path analysis.

Results of the Analysis of the Research Model Path
Figure 2 shows the results of the analysis of the research model path.

Analysis Results: Model Fit and Quality Indices
Table 10 reports the model fit and quality indices.

Path Coefficients and P-values
Table 11 reports the path coefficients, and Table 12 reports the P-values.

Combined Loadings and Cross-loadings
The combined loadings and cross-loadings are given in Table 13.

Indicator Weights
The indicator weights are given in Table 14.

Figure 3 shows the best-fitting curve and data points for the multivariate relationship between EoH and HDI. EoH moderates the relationship between HDI and EQ, so that provinces with low household electrification (Low EoH) and provinces with high household electrification (High EoH) differ in the environmental damage caused by sustainable socioeconomic development (HDI) (path coefficient = 0.164, P = 0.002).

Latent Variable Coefficients
The provinces with low household electrification (Low EoH) experience a turning point at EQ = 68.88, while in provinces with high household electrification (High EoH), the greater the HDI value, the lower the EQ value, down to the point EQ = 35.66.
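As a minimal numerical counterpart to hypothesis H4, the sketch below tests a moderation effect by adding a mean-centered interaction term to an ordinary least-squares path. This is the common regression analogue of a PLS moderation test, not the WarpPLS implementation, and the generated data are synthetic placeholders rather than the Indonesian panel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the standardized latent scores.
n = 297                      # 33 provinces x 9 years of observations
eoh = rng.normal(size=n)     # household electrification
hdi = 0.5 * eoh + rng.normal(scale=0.8, size=n)
eq = -0.3 * hdi - 0.2 * eoh + 0.15 * hdi * eoh + rng.normal(scale=0.8, size=n)

# Mean-center before forming the interaction to reduce collinearity.
hdi_c, eoh_c = hdi - hdi.mean(), eoh - eoh.mean()
X = np.column_stack([np.ones(n), hdi_c, eoh_c, hdi_c * eoh_c])

beta, *_ = np.linalg.lstsq(X, eq, rcond=None)
print("intercept, HDI, EoH, HDI x EoH:", np.round(beta, 3))
# A non-negligible coefficient on HDI x EoH indicates that EoH moderates
# the HDI -> EQ relationship, in the spirit of the reported moderating
# path (path coefficient = 0.164, P = 0.002).
```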
CONCLUSIONS AND POLICY IMPLICATIONS
The sustainable development measurement model we offer, using three variables, namely the human development index (HDI) representing sustainable socioeconomic development, environmental quality (EQ) representing environmental sustainability, and household electrification (EoH), is robust and fast enough to measure whether development in a country is on the track of sustainable development. We analyzed the relationship between household electrification (EoH) and the human development index (HDI) and the relationship between household electrification (EoH) and environmental quality (EQ). The subsequent analysis examined the relationship of the human development index (HDI) to environmental quality (EQ). The final analysis examined the moderating role of household electrification (EoH) in the relationship between the human development index (HDI) and environmental quality (EQ), which reveals the difference in environmental quality (EQ) between provinces with low household electrification (Low EoH) and provinces with high household electrification (High EoH).

The research's policy implications pose a dilemma for Indonesia's government because of the development priorities that have been planned and applied. On the one hand, household electrification is a fundamental need of the community for development; on the other hand, household electrification causes a decrease in environmental quality. We recommend that the Indonesian government replace the fossil fuel power plants that supply electricity to households with environmentally friendly power plants; this can be done in stages so that people's electricity needs remain fulfilled.
3,641.4
2021-04-10T00:00:00.000
[ "Environmental Science", "Economics" ]
Muscle Wasting and Sarcopenia in Heart Failure—The Current State of Science

Sarcopenia is primarily characterized by skeletal muscle disturbances such as loss of muscle mass, quality, strength, and physical performance. It is commonly seen in elderly patients with chronic diseases. The prevalence of sarcopenia in chronic heart failure (HF) patients amounts to up to 20% and may progress into cardiac cachexia. Muscle wasting is a strong predictor of frailty and reduced survival in HF patients. Despite many different techniques and clinical tests, there is still no broadly available gold standard for the diagnosis of sarcopenia. Resistance exercise and nutritional supplementation represent the currently most used strategies against wasting disorders. Ongoing research is investigating skeletal muscle mitochondrial dysfunction as a new possible target for pharmacological compounds. Novel agents such as synthetic ghrelin and selective androgen receptor modulators (SARMs) seem promising in counteracting muscle abnormalities, but their effectiveness in HF patients has not been assessed yet. In the last decades, many advances have been accomplished, but sarcopenia remains an underdiagnosed pathology and more efforts are needed to find an efficacious therapeutic plan. The purpose of this review is to illustrate the current knowledge on the pathogenesis, diagnosis, and treatment of sarcopenia in order to provide a better understanding of the wasting disorders occurring in chronic heart failure.

Introduction
Sarcopenia is defined as diminished muscle strength, rooted in a reduction of muscle quantity and quality, often associated with reduced physical performance, according to the new definition of the European Working Group on Sarcopenia in Older People [1]. Since muscle mass, strength, and function are strongly influenced by demographic and anthropometric features [2], worldwide uniform threshold values have not been established yet. This limitation, in conjunction with the other definitions adopted, inevitably leads to incongruities in the assessment of sarcopenia among different populations [3] (Table 1). Sarcopenia is commonly observed in older patients, with a prevalence between 10 and 40%, depending on the definition and the age range used in the studies [4]. The percentage of muscle mass loss progressively increases over the years, starting from the 5th decade at 1%/year and reaching up to 50% by the 8th-9th decade of life [5]. Interestingly, a recent meta-analysis of 41 studies and 34,955 participants showed that the prevalence of sarcopenia in nursing home individuals in the included studies was much higher (51% (95% CI: 37-66%) in men and 31% (95% CI: 22-42%) in women) than in community-dwelling individuals (11% (95% CI: 8-13%) in men and 9% (95% CI: 7-11%) in women), possibly due to lower activity levels in nursing homes [6]. Recent evidence suggests that a dysregulation of immunosenescence and a low-grade progressive inflammatory response in elderly persons (inflammageing) [7,8] may be involved in the development of sarcopenia [9,10]. Diet and physical activity have been associated with inflammatory activation in age-related sarcopenia [11].
In addition, epigenetic mechanisms may be involved in age-related muscular changes [12]-a study comparing blood DNA methylation in sarcopenic and non-sarcopenic old women (>65 years) reported a lower methylation of differentially methylated cytosine-phosphate-guanine sites (dmCpGs) related to Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathways associated with muscle function and energy metabolism in the sarcopenic group (p = 0.004), suggesting that these processes might be epigenetically altered in ageing sarcopenia. Hypermethylated promoter regions of genes associated with metabolism in the sarcopenic group also indicate a possible suppression of cellular energy regulation in these subjects. Muscle wasting represents a major risk factor for decreased muscular resistance [13] and loss of independence in daily life activities (19.6% vs. 13.8% dependency, sarcopenia vs. non-sarcopenia, respectively, p < 0.001) [14]. In a recent meta-analysis of 33 studies with more than 45,000 individuals, it was shown that sarcopenia was significantly associated with bone fractures, and sarcopenic individuals had a significantly higher risk of falls in cross-sectional studies [15]. A study of 4452 disability-free adults aged ≥65 years investigating disability in sarcopenia (mean follow-up 30 months) found that, compared to non-sarcopenia, individuals with sarcopenia or low serum albumin alone had an increased risk of disability (Hazard ratio (HR): 2.74, 95% CI: 1.58-4.77, and HR: 1.71, 95% CI: 1.26-2.33, respectively), which was further increased in the group that had both sarcopenia and low serum albumin (HR: 3.73, 95% CI: 1.87-7.44) [16]. A prospective cohort of 534 individuals (73.5 ± 6.2 years, 60.5% female) [17] showed a higher mortality (16.2% vs. 4.6%, p < 0.001) after 3 years in individuals diagnosed with sarcopenia than in those not diagnosed, although no association between baseline sarcopenia and physical disabilities or institutionalizations was highlighted [18]. A small study comparing 30 sarcopenic vs. 30 control individuals (77 ± 6 years, 58% female) showed that sarcopenia may be associated with reduced diaphragmatic muscle thickness and respiratory function. The correct assessment of sarcopenia still represents a challenge for clinicians. Whether the dual-energy X-ray absorptiometry (DXA) scan should represent the current reference standard for skeletal muscle measurement is still a matter of debate [19,20]. The high costs and scarce availability of this technique have led to the search for alternatives. The recent development of the D3-dilution method [21,22], with high reproducibility and minimal invasiveness, has accomplished promising results in the estimation of skeletal muscle mass, but its adoption in the clinical setting as a routine method remains to be implemented. A robust panel of biomarkers to detect the first signs of muscular degradation has not been established yet. Another frequently seen co-morbidity in these patients is cachexia, which is itself often accompanied by reduced hand grip strength and/or low walking speed [23], as well as worse performance in the short physical performance battery test [24]. However, the lack of uniform reference values for sarcopenic patients in these tests strongly demands a standardization of the clinical assessment of sarcopenia [25].
Recently, in a study based on 469,830 UK Biobank participants, the associations of sarcopenia with adverse outcomes (all-cause mortality, incidence of and mortality from cardiovascular disease (CVD), respiratory disease, and chronic obstructive pulmonary disease (COPD)) were strongest when sarcopenia was defined as slow gait speed plus low muscle mass, followed by severe sarcopenia, strongly suggesting that this combination of physical capability markers should still be considered in the diagnosis of sarcopenia [26]. The Asian Working Group for Sarcopenia (AWGS) studied the prevalence of sarcopenia in 2061 older community residents (>60 years of age) [27]. Comparing the AWGS2014 algorithm to the revised AWGS2019 algorithm [28] (slow gait speed cut-off at <1 m/s and prolonged five-time chair-stand time (≥12 s)), the authors identified 60 and 89 individuals with sarcopenia, respectively. Interestingly, the authors found a linear correlation between the severity of sarcopenia and carotid intima-media thickness (no sarcopenia: 0.94 ± 0.31, sarcopenia: 1.04 ± 0.41, and severe sarcopenia: 1.07 ± 0.55 mm, p = 0.003), which could be used as a new marker [29]. High levels of homocysteine (OR: 1.9, 95% CI: 1.0-3.6) and of high-sensitivity C-reactive protein (hsCRP) (OR: 3.9, 95% CI: 2.2-6.9) were independently associated with sarcopenia in data from 1582 participants, with stronger correlations seen in women [30]. Sarcopenia can be a modifiable condition. A multimodal approach based on physical activity [31,32] and dietary recommendations [33] currently seems to be the most effective strategy to counteract progressive age-dependent muscle impairments and improve quality of life as well as life expectancy. Recent evidence suggests that a protein intake above 1-1.5 g/kg/day may positively influence the anabolic-catabolic imbalance in subjects suffering from sarcopenia [34]. An association of dietary habits (7-day food record) in 254 men with a mean age of 71 at baseline with the prevalence of sarcopenia 16 years later has been described [35]. A healthy dietary pattern based on the dietary guidelines defined by the WHO tended to protect against the development of sarcopenia over 16 years; in particular, the authors found indications that increased adherence to a Mediterranean dietary pattern might be advantageous. The authors of a recent review suggest that elderly individuals with sarcopenia should eat at least three servings of fish a week to reach the minimal daily intake of 4-4.59 g of omega-3, reaching 50% of the recommended daily allowance (RDA) of vitamins E and D. The high biological value of the protein in 150 g of fish and its highly available magnesium (20% of the RDA in 150 g of fish) suggest fish as a "functional food" in sarcopenia [36]. It has been shown that the combination of malnutrition and sarcopenia carries a synergistically accumulated risk of death, as found in a prospective analysis of 427 hospitalized adults over 80 years of age [37]. A metabolic signature has been described in a cohort of 189 sarcopenic individuals, in which levels of essential amino acids including lysine, methionine, phenylalanine, and threonine, as well as branched-chain amino acids and choline, were inversely correlated with sarcopenia. Furthermore, nicotine metabolites (cotinine and trans-3'-hydroxycotinine) and vitamin B6 status were linked to one or more clinical and functional measures of sarcopenia [38].
Other studies are investigating the molecular mechanisms involved in mitochondrial function [39] that might be relevant for muscle homeostasis in older age and could represent a new target for pharmaceutical interventions. Recent findings in older mice attribute a certain importance to peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α), whose expression is enhanced by physical exercise, leading to oxidative phosphorylation (OXPHOS) protein levels in mitochondria increased beyond the levels induced by exercise in wild-type mice, while a muscle-specific PGC-1α knockout blunted the exercise-controlled increase in OXPHOS proteins [40]. A recent publication shows that humans with sarcopenia, independently of their ethnicity, reproducibly exhibit a prominent transcriptional signature of mitochondrial bioenergetic dysfunction, as evidenced by low PGC-1α/ERRα signaling and downregulation of mitochondrial proteostasis genes. These changes result in fewer mitochondria, reduced respiratory complex expression and activity, as well as low nicotinamide adenine dinucleotide (NAD+) levels due to its disturbed biosynthesis [41]. The protein kinase mechanistic target of rapamycin (mTOR) is also a crucial modulator of cell growth, and its loss in skeletal muscle has recently been investigated in knockout mouse models, suggesting that mTOR activity is essential for the regulation of peroxisome proliferator-activated receptor (PPAR) and PPAR-gamma coactivator 1-alpha (PPAR/PGC-1α)-mediated OXPHOS capacity in vivo [42]. Furthermore, a mutant mTOR lacking the kinase activity induces robust suppression of postnatal muscle mammalian target of rapamycin complex 1 (mTORC1) signaling [42], demonstrating damaging effects of mTOR mutations on muscle metabolism. Surprisingly, mTORC1 is hyperactivated in sarcopenic muscle, and its partial inhibition by the novel compound RAD001 resulted in an attenuation of sarcopenia, shown by increased muscle mass and fiber-type cross-sectional area, as well as downregulation of several genes associated with senescence. Hence, RAD001 may be considered a potential sarcopenia treatment [43]. Despite recent advances, the underlying mechanisms characterizing sarcopenia in ageing are still under investigation in preclinical as well as clinical settings. Because of its negative impact on the quality of life, it is necessary to increase the knowledge of this wasting process and to find preventive and therapeutic measures that may also be applied in patients with chronic diseases. Therefore, the aim of this review is to increase clinical awareness of sarcopenia, with a particular focus on current pathogenetic knowledge and therapeutic possibilities that may counteract wasting disorders in chronic heart failure.

Sarcopenia in HF
Heart failure (HF) is a systemic disease afflicting up to 2% of the population worldwide [44,45]. Although it represents a major burden in terms of expenditure of socio-economic resources and costs [46], the successful accomplishments in diagnostics [47] and treatment [48][49][50] achieved in the last decades have led to an improvement in outcomes [51] and to an increased life expectancy [52]. Consequently, the number of older HF patients with increasing clinical complexity is progressively growing [53].
As a result, a multimodal approach is needed, combining many different medical disciplines to treat non-cardiac co-morbidities such as wasting disorders [54] and to improve various secondary outcomes as well [55,56]. Muscle wasting is one of the main causes of exercise intolerance and ventilatory inefficiency in HF patients [57]. It promotes the aggravation of other clinical conditions and causes a deterioration of quality of life [58]. It is associated with a longer hospital stay [59], more frequent re-hospitalizations [60], and worsened prognosis [61]. In the Studies Investigating Co-morbidities Aggravating Heart Failure (SICA-HF) [62], which enrolled 200 chronic HF patients, the prevalence of sarcopenia in HF patients with reduced ejection fraction (HFrEF) was nearly 20% higher than in healthy adults of the same age [62,63]. Similar results have been observed in HF patients with a preserved ejection fraction [64,65]. Therefore, sarcopenia and chronic HF seem to be intertwined, each complicating the progression and outcome of the other [66]. Sarcopenia can even be found in obese HF patients ("sarcopenic obesity" [67]), with a prevalence between 1.3% and 17.5% [68]. Even though these patients show higher amounts of body fat, they have lower muscle mass [69]. Many different mechanisms can influence the muscle metabolism in HF patients, such as hyper-activation of the sympathetic system, systemic inflammation, and an alteration of neuro-hormonal release [70]. Elevated oxidative processes, increased activity of the ubiquitin-proteasome system, higher apoptotic activity, and reduced release of skeletal muscle growth factors contribute to a generalized catabolic shift in muscular tissue homeostasis [71]. As a result of these alterations, systemically enhanced protein degradation causes muscle wasting. It is primarily characterized by atrophy of the fast-twitching type II myofibers but also of the slow-twitching type I myofibers, decreased muscular capillary density, and fat infiltration [72]. HF patients present with various hormonal disturbances [73]-impaired expression of insulin-like growth factor 1 (IGF-1) [74], vitamin D deficiency [75], reduced levels of testosterone [76], and reduced levels of growth hormone (GH) [77] have been reported. A cross-sectional study with 3276 elderly participants, with sarcopenia defined by the Asian Working Group on Sarcopenia diagnostic criteria, showed that appendicular skeletal muscle mass was positively associated with gender and Body Mass Index (BMI), as well as with GH, testosterone, IGF-1, mechanical growth factor (MGF), urea nitrogen, creatinine, and Hb levels, but negatively associated with HDL-C (all p < 0.05). Using logistic multivariable regression analysis, the authors showed an independent association of IGF-1, MGF, BMI, and gender with appendicular skeletal muscle mass (all p < 0.05) [78]. Since the IGF-1/GH axis contributes to the preservation of skeletal muscle mass [72], its modulation by supplementation of these hormones has been hypothesized as a treatment for sarcopenia in older adults [79], but there is still no robust evidence of beneficial effects [80]. Vitamin D deficiency is common in old age [81], and there is evidence that this condition enhances the risk of falls and of declined physical performance [82]. Additionally, low levels of vitamin D have been associated with the risk of HF in elderly individuals [83].
Vitamin D supplementation in adults aged 60 years and older has shown positive results, increasing muscle strength and performance [82]. However, its replacement in chronic HF patients has only demonstrated improvements in the inflammatory profile, not in exercise capacity or outcomes [84,85]. Low endogenous testosterone may represent an independent risk factor for HF [86]. Experimental administration of testosterone as a possible strategy to counteract exercise intolerance and dyspnoea in chronic HF has been investigated, with positive results regarding symptom reduction [87] and increased exercise capacity [88] in HF patients. However, the safety of testosterone supplementation and its potential negative effects on the cardiovascular system [89] (i.e., ischemic stroke, acute coronary syndrome, myocardial ischemia, congestive heart failure, death from coronary disease) have to be further examined [90]. Even though many HF patients experience reduced exercise tolerance, resistance training has been demonstrated to provide a positive stimulus for muscle mass, muscle quality, and physical performance in patients with HF [91]. Its combination with aerobic exercise seems to exert anti-atrophic [92] as well as anti-inflammatory effects [93]. In general, physical activity is beneficial to prevent wasting [94] and to improve quality of life and prognosis in these patients [95,96]. With regard to the medical treatment of sarcopenia, supplementation of essential amino acids (8 g/day) has shown positive results regarding physical performance, but did not increase absolute muscle mass in patients with stable chronic HF and severe loss of muscle mass [97]. Some standard HF medications have demonstrated potential benefits against muscle loss. Angiotensin-converting enzyme inhibitors (ACE-Is), due to their anti-oxidative and anti-inflammatory effects, could have muscle-protective effects [98]. In 1998, Vescovo et al. [99] reported in a small study of 16 HF patients that a 6-month treatment with enalapril (n = 8) or losartan (n = 8) improved exercise capacity. In 2003, a sub-analysis of the Studies of Left Ventricular Dysfunction (SOLVD) [100], including 1929 chronic HF patients, showed that patients taking enalapril had a 19% lower risk of developing cachexia. Whether ACE-Is are beneficial in healthy older people remains unclear: a sub-analysis of the Berlin Aging Study II (BASE-II), including 838 community-dwelling elderly people, found similar muscle mass, strength, and function in patients with vs. without ACE-I [101], whereas a double-blind randomized controlled trial in 130 participants ≥65 years with functional impairment showed better functional capacity after 20 weeks with ACE-I vs. placebo [102]. ACE-Is may also help in counteracting angiotensin II-dependent catabolic effects by modulating the GH/IGF-1 axis [103]. Blocking ACE, and therefore the generation of Ang-II, results in an upregulation of ACE2 expression and activity in skeletal muscle, leading to increased levels of Ang1-7 and activation of its receptor (MasR), which contributes to improved insulin sensitivity [104]. Beneficial effects of mineralocorticoid antagonists on skeletal muscle homeostasis have been postulated [105]. Despite some positive results on muscle quality in rats [106] and on exercise capacity in HF patients [107], Burton et al.
[108] did not find an association between spironolactone and better physical function in a randomized placebo-controlled trial including 120 participants aged >64 years without HF. Currently, several compounds for wasting disorders in chronic HF are being tested in preclinical and clinical settings [113]. Acylated ghrelin has a potential anti-catabolic effect, as demonstrated by an experimental study conducted in a chronic HF rat model [114], possibly through regulation of the rate-limiting E3 ubiquitin ligases of the ubiquitin-proteasome system (UPS), muscle RING-finger protein-1 (MuRF-1) and Muscle Atrophy F-box (MAFbx)/atrogin-1 [115]. Moreover, its intravenous administration in a small cohort of HF patients demonstrated an amelioration of exercise capacity and muscle strength [116]. Anamorelin, a non-peptide ghrelin analogue, was recently tested in healthy young men [117], producing gains in appetite, food intake, and weight. In non-small cell lung cancer patients [118], the same compound produced additional improvements in lean body mass and cachexia symptoms. Recently, a chronic HF mouse study showed diaphragm fiber atrophy, an approximately 20% impairment of contractile function, and reduced mitochondrial enzyme activities. Treatment after left anterior descending artery myocardial infarction (LAD-MI) with the MuRF-1 inhibitor compound ID#704946 partially prevented these chronic HF effects on the diaphragm [119]. The negative regulator of muscle mass myostatin (also known as growth/differentiation factor 8 (GDF-8)) binds primarily to the activin type IIB receptor (ActRIIB) and is upregulated under catabolic conditions such as sarcopenia and cachexia. Knockout of the myostatin gene led to significantly increased muscle mass in mice [120]. However, the relationship between muscle mass and strength in these mice was not linear. Spontaneous, natural deletions of this gene occur in animals such as Belgian Blue cattle and whippets and, in a rare case, in humans [121]. Human myocardium expresses increased levels of myostatin in end-stage heart failure compared with controls, and the related signaling pathways in the myocardium show a gender effect [122]. Myostatin expressed and secreted by the myocardium is thought to be causal for skeletal muscle wasting in a transaortic constriction chronic HF mouse model [123]. Binding of activin A to ActRIIB in skeletal muscle was shown to induce muscle atrophy that was dependent on a p38β mitogen-activated protein kinase (MAPK)-activated signaling pathway and resulted in the upregulation of the ubiquitin ligases MAFbx and UBR2 (E3alpha-II), as well as increases in LC3-II, a marker of autophagosome formation [124]. Plasma activin A levels have been reported to be an independent predictor of survival in cancer patients [125]. Interestingly, doxorubicin-induced cachexia was attenuated by ActRIIB ligand blocking. Pre-treatment with soluble ACVR2B-Fc had only a minor impact on the cardiac muscle, while it showed strong effects in skeletal muscle at the transcriptome level [126]. These data make myostatin blockade an interesting strategy to counteract muscle loss in various conditions and diseases. However, while neutralizing antibodies such as MYO-029, AMG 745, and LY2495655, or soluble receptor decoys such as ACE-011 and ACE-031, have significant beneficial effects on muscle mass and strength, they also exhibit several side effects, including urticaria, aseptic meningitis, diarrhea, confusion, fatigue, and unintentional muscle contractions [79].
Different selective androgen receptor modulators (SARMs) [127] are currently being explored due to their potential anabolic activity without the side effects of androgens. Enobosarm showed some promising results in a double-blind, placebo-controlled phase II trial enrolling cancer patients with at least 2% weight loss in the 6 months before recruitment. A significant increase in total lean body mass over 4 months was observed in patients treated with 1 mg enobosarm once daily (median 1.5 kg (range −2.1 to 12.6 kg), p = 0.0012) and 3 mg enobosarm (median 1.0 kg (range −4.8 to 11.5 kg), p = 0.046), while placebo resulted in no change (median 0.02 kg (range −5.8 to 6.7 kg), p = 0.88) [128]. Nonetheless, there was no improvement in muscle strength or physical performance. GSK2881078 [129], another SARM, produced dose-dependent gains in lean mass in healthy subjects, with the greatest response observed in postmenopausal women, while MK-4541 [130], an androgen receptor agonist with 5α-reductase inhibitor function, exhibited anabolic effects and improved muscle function in castrated male mice. Despite these promising results, data from large-scale studies confirming the potential muscle-protective effects of these compounds in HF patients are not yet available. Some of the mechanisms involved in muscular wasting, such as mitochondrial dysfunction [131], overactivation of the ubiquitin-proteasome system [132], and abnormal cellular autophagy [66], are still under investigation and might be possible targets for future therapeutic options.

Sarcopenia in Cardiac Cachexia

A sarcopenic phenotype may precede and co-present with cachexia in patients with advanced stages of HF [133], a condition associated with extremely reduced survival [134]. Cachexia has been diagnosed in 19% of male patients with stable chronic HF, while 7% had both sarcopenia and cachexia [62]. Other studies confirm that the prevalence of cachexia in chronic HF ranges from 10% [135] to 16% [136]. Cachexia seems to result in a progressive systemic tissue depletion, which involves the skeletal muscle and the fat tissue [137]. Clinically, it is defined by an unintentional weight loss of ≥5% in the last 12 months and three of the following five components: abnormalities in blood tests (increased inflammatory biomarkers, hemoglobin <12 g/dL, and serum albumin <3.2 g/dL), reduced muscular strength, anorexia, low fat-free mass index, and signs of fatigue [134]. It also occurs, under the common denominator of chronic inflammation [138,139], in various other chronic diseases, e.g., chronic obstructive pulmonary disease (COPD) [140], chronic kidney disease (CKD) [141,142], and cancer [143]. Contributing elements to the deleterious changes in body composition in patients with cardiac cachexia are anorexia, malnutrition, intestinal congestion [144], and an inflammatory cytokine storm, which have also been described as common complications in severe HF [145,146]. High serum levels of adiponectin [147], a protein involved in the cellular energy control of several tissues, have been found in HF patients with cachexia, unrelated to their body mass index [148]. This alteration suggests its potential role as a biomarker of body fat changes and tissue wasting [149], as well as a predictor of mortality [150,151], in these patients. Furthermore, an association between adiponectin resistance and peripheral muscle abnormalities was found in non-cachectic HF patients aged over 61 years [152].
Cardiac cachexia is also associated with myocardial atrophy in rodent models. One of the key regulators seems to be muscle-specific RING finger protein-1 (MuRF-1) [153], an E3 ubiquitin ligase present in skeletal as well as in cardiac muscle. Experimental small-molecule inhibition of apoptotic and ubiquitin-proteasome-dependent proteolysis has shown promising results in reducing muscle atrophy and contractile dysfunction in rodents with cardiac cachexia [119]. The current literature does not provide evidence of available or experimental pharmacological agents able to prevent or delay the progression of cardiac cachexia. Some promising results on mitigating the side effects of tumors on the heart and on prognosis through HF medications derive from experimental work in rats with cancer cachexia [154]. In conclusion, more efforts are needed to establish a worldwide, standardized definition and assessment of ageing- as well as disease-related sarcopenia. More attention has to be paid to the early recognition and staging of wasting processes in HF. Large-scale trials in HF patients are needed to establish the efficacy and safety profile of new agents. Author Contributions: All the authors contributed to the writing. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding.
Non-Human Moral Status: Problems with Phenomenal Consciousness

Abstract

Consciousness-based approaches to non-human moral status maintain that consciousness is necessary for (some degree or level of) moral status. While these approaches are intuitive to many, in this paper I argue that the judgment that consciousness is necessary for moral status is not secure enough to guide policy regarding non-humans, and that policies responsive to the moral status of non-humans should take seriously the possibility that psychological features independent of consciousness are sufficient for moral status. Further, I illustrate some practical consequences of calling consciousness-based views into question.

INTRODUCTION

An attribution of moral status to some entity signifies that the entity has non-derivative moral significance, in the sense that the entity's interests matter morally, for the entity's own sake. That human beings have a high level of moral status (sometimes called full moral status) is not in dispute, although there is much disagreement about why this is so. But here I focus on non-human moral status. There is evidence in many societies of a growing willingness to reconsider the moral status of non-humans. This is salutary, though in recent academic literature there has been, in my view, too much focus on consciousness as the foundation of non-human moral status. In section two I canvass the options, and identify the range of views that qualify as consciousness-based: these are views that regard phenomenal consciousness as necessary for moral status. In section three I argue via several routes that the judgment that consciousness is necessary for moral status may not be robust enough to guide policy regarding non-humans, and I illustrate some practical consequences of calling consciousness-based views into question.

PHENOMENAL CONSCIOUSNESS AND MORAL STATUS

Phenomenal consciousness is a property of a subject's psychological states (e.g., seeing red, feeling dizzy) in virtue of which there is "something it is like" for the subject to token the state. The thought that the possession of phenomenal consciousness supports the attribution of (some level of) moral status to an entity is intuitively plausible, widespread, and underlies a wide range of recent work on moral status (Kahane and Savulescu 2009; Shepherd 2018a; Lavazza and Massimini 2018; Sawai et al. 2019; DeGrazia 2020; Lee 2022). Many endorse a consciousness-based approach to understanding moral status. As Shevlin (2020) has it, a consciousness-based approach maintains that phenomenal consciousness is a necessary condition on psychological moral patiency. (This leaves open the possibility that an entity could matter morally for non-psychological (e.g., ecological) reasons.) There are many different ways to fill out a consciousness-based approach to moral status. And there are different ways to resist such an approach. It may be useful to have a map of our options, which, as I see it, fall roughly into six families.

First, While Consciousness Is Necessary for Moral Status, It Is Not Sufficient

There are many ways to flesh out this view. One might maintain, for example, that in addition to consciousness, significant cognitive sophistication must be in place for an entity to possess some level of moral status. While a simplistic conscious being possesses no moral status, on this option, the addition of different forms of attention, or memory, or abstract planning, or language (pick your favorite capacities) makes a moral difference.
Those who maintain that "sentience," understood as the capability for pleasant or unpleasant experiences, is necessary and sufficient for moral status may also take this option (DeGrazia 2021). For sentientists, consciousness on its own may be necessary but insufficient, simply because some forms of consciousness could be present in an entity without sentience being co-present (see Shepherd 2018a).

Second, Consciousness Is Necessary and Sufficient for Possession of (Some Level of) Moral Status, but It Is Not the Only Contributor to an Entity's Level or Amount of Moral Status

To see how this view might work, compare a simplistic conscious entity (a snail, perhaps) and a sophisticated conscious entity (a bonobo, perhaps). One might hold that while both entities have some level of moral status, the bonobo has higher moral status, and not (only) in virtue of having a richer stream of consciousness. Rather, the bonobo has higher status because it is a more sophisticated system overall, and as such, it is able to participate in forms of life that have moral value: the acquisition of knowledge, the development of social relationships, or relationships of care, or what have you. Faden et al. (2021) come close to this view regarding animals. They distinguish sentience from cognition and hold that, for animals, sentience is necessary and sufficient for some level of moral status. In addition, they claim that "cognition, too, is a source of welfare interests" (163). So cognition can contribute to an animal's level of moral status: "the threshold that must be crossed to move to higher levels of moral status is most likely the capacity for modestly sophisticated forms of cognition at some level of reason, including the capacity for autonomous choice and self-awareness" (163).

Third, Consciousness Is Necessary and Sufficient for the Possession of Some Level of Moral Status, and It Is the Only Contributor to One's Level of Moral Status

Experientialist theories of well-being maintain some version of the following claim: "only what affects our experience can alter someone's wellbeing" (van der Deijl 2019, 1769; see also Bramble 2016). This is often connected to a claim about intrinsic value, namely, that the only things that are intrinsically good or bad for someone are their conscious experiences. On views like these, one might hold that it is only the potential richness (or some related property) of some entity's conscious experiences that determines their level of moral status. Cognition may thus make an indirect contribution to moral status, if cognition alters the richness of conscious experience. But there may be other routes to richness; cognition, or any other non-conscious feature of mentality, is on this view inessential. On all of the above options, there is no moral status without consciousness. So these three options are varieties of a consciousness-based approach to moral status. As already noted, at least amongst philosophers and bioethicists, this approach seems to enjoy a significant majority. And in spite of ample room for disagreement underneath the banner of a consciousness-based approach, this approach is unified by what we might call a judgment of necessity: some aspect of consciousness is, at minimum, necessary for the possession of moral status. (For some, this judgment seems to be underwritten by a strong intuition; others find their way to it via argumentation, or via considerations of theoretical parsimony.) Intuitions in support of this judgment can be elicited by a range of cases. Perhaps the most common kind of case involves zombies (Chalmers 1996) or partially zombified people (Siewert 1998).
This kind of case asks us to begin with the mental life of something familiar, like an adult human. In a case of partial zombification (or what Siewert called "phenolectomy"), we imagine aspects of consciousness stripped away. We can imagine a mental life that is functionally identical to our own, but with no phenomenology of smell, or no conscious experience of pleasure or pain, for example. In a case of total zombification, we imagine all the same psychological functionality in the absence of any consciousness. For such a zombie, it is said, "there is nothing that it is like." The intuition that many have is that without consciousness, the interests of the entity in question are no longer morally significant. Of course, zombies do not present policy problems in our world. But a wide range of non-humans do, and if one adopts a consciousness-based approach, then a central question will be whether the non-human in question is conscious. If not, moral consideration need not go any further. If yes, there may be massive practical implications. Here, for example, is DeGrazia applying his moral status framework to future artificial intelligence technology:

[I]f we reasonably believe a robot is sentient, we should give its apparent interests equal moral weight to our comparable interests, an immediate application of which is that we may not use them as slaves or uncompensated servants. (DeGrazia 2021, 52)

Not everyone takes a consciousness-based approach to moral status, however. Here are three types of view that do not.

Fourth, Consciousness Is Irrelevant for the Possession of Moral Status

One might have this view if one were a kind of illusionist about consciousness. According to illusionism, phenomenal consciousness does not exist, and our belief that it does is due to an illusion our minds create and sustain. As Frankish has it, "our sense that there is something it is like to undergo conscious experiences is due to the fact that we systematically misrepresent them as having phenomenal properties" (2016, 11). Now, the illusionist is not forced to take this option: they might hold that in worlds where consciousness exists, it does ground moral status (see Kammerer 2019). But it is coherent to maintain both illusionism about consciousness and the view that consciousness is irrelevant to moral status. And one can see that illusionism might create some motivation to accept this view.

Fifth, Consciousness Is Neither Necessary nor Sufficient for Possession of Moral Status, but It May Be a Contributor to an Entity's Level of Moral Status

I know of no one who argues for this view in print, but one might get to such a view by taking a kind of objective list approach to the value of an entity, where consciousness is only one item on the list (and where multiple features need to be co-present for moral status), or where consciousness is able to instrumentally contribute to the realization of other items on the list.

Sixth, While Consciousness Is Not Necessary for Possession of Moral Status, It May Be Sufficient

Amongst those who do not endorse a consciousness-based approach, this is the most popular option. One might get to it in any of several ways.
Peter Carruthers (1999), for example, has argued that though consciousness can be morally relevant, "the psychological harmfulness of desire-frustration has nothing (or not much … ) to do with phenomenology, and everything (or almost everything) to do with thwarted agency" (479). Carruthers is suggesting, in effect, that capacities for desire-satisfaction might be sufficient for some moral status, even if other aspects of consciousness are likewise sufficient. Similarly, Neil Levy has argued that "A great deal of what matters to us and about us can be explained by functional and representational properties that may not be sufficient for phenomenal consciousness" (2014, 127). Their arguments are at least consistent with a family of approaches that enjoy some popularity in veterinary medicine and animal science, what Shevlin (2020) calls affective-state approaches. Shevlin outlines this as a family of "views that (i) identify [psychological moral patiency] with a capacity for undergoing canonically unpleasant states such as pain, nausea, and fear, and (ii) either reject or seek to sidestep the relevance of consciousness to [psychological moral patiency]" (187). Some theorists have articulated versions of this option not with animals, but with future artificial intelligence (AI) in view. We have already noted that while Faden et al. (2021) seem to consider consciousness necessary for animal moral status, they withhold judgment regarding the moral status of AI. Sinnott-Armstrong and Conitzer (2021) go further:

If an AI cannot feel pain, it will have no right not to be caused pain. But even if an AI does not feel pain or experience any phenomenal consciousness, that is not enough to show that it does not have any moral rights, because it still might have moral rights that are unconnected to phenomenal consciousness, including, possibly, the right to freedom. An AI that does not feel pain could still access information and use it in making choices, seeking goals, and performing tasks … That would be a basis for its moral right to freedom. (2021, 281)

While the claim here is put in terms of AI, in my view, if we are to take this option regarding the moral status of AI, I see little reason to withhold it from animals. In sum, then, we confront a range of options regarding the relevance of consciousness to moral status. Although there is much disagreement on details, the leading coalition is the consciousness-based approach. This coalition is bolstered by what I have called the judgment of necessity. Some reject this judgment, and argue that aspects of non-conscious mentality generate moral significance in their own right. It is my aim here to offer support to the minority who reject this judgment.

REVISITING WHETHER CONSCIOUSNESS IS NECESSARY FOR MORAL STATUS

As we have seen, the consciousness-based approach depends upon the judgment of necessity. I claim that the grounds for this judgment are not robust enough to serve as a foundation for policy that aims to be sensitive to the moral status of non-humans. To support this claim, I offer three arguments.

Argument from Illusionism

The first argument depends upon illusionism about consciousness (for recent discussions, see Dennett 2016, 2019; Frankish 2016; Kammerer 2021). As noted above, illusionism is the view that phenomenal consciousness does not exist (at least in our world), and our belief that it does is due to an illusion our minds create and sustain.
If illusionism is true, what can be said regarding the judgment that consciousness is necessary for moral status? One could hold that consciousness is necessary for moral status, and thus that no one in our world has moral status. Let us set that option aside, and consider the view that, while consciousness would be sufficient for moral status (if anyone were conscious), in our world moral status has different grounds. If one takes this view, one is faced with a long line of philosophers and bioethicists who are seemingly mistaken about moral status. How to explain the fact that many philosophers endorse a consciousness-based approach to moral status? The illusionist might postulate that while we are able to introspectively locate features of psychological life that are morally significant (the presence of pains, the recognition of having achieved a goal, the self-aware pursuit of personal projects and values), we are not able to clearly distinguish these features from the illusory phenomenal properties that seem attached to them. So the judgment of necessity is based upon a cognitive mistake that is systematic, even if not quite as systematic as the mistake that generates the illusion of consciousness. Is this what is really going on in our case? If I were to accept illusionism, I would think so. Introspection is not always reliable. And although philosophers debate introspection's reliability regarding aspects of phenomenal consciousness (Schwitzgebel 2008; Kriegel 2013; Peels 2016), if I were convinced on independent grounds that illusionism is true, I would lose confidence that introspection can accurately pinpoint the features of my own mental life that support a claim to moral status.

Argument from Human Ignorance

The second argument targets a broader patch of theorists. It involves the claim that the judgment of necessity is based in human ignorance. Consider what happens when we are asked whether there is any value in the mental life of a non-conscious being. There are many ways to arrive at a judgment in response, but a common method is to think about what is valuable in our own mental lives, and to think about whether that kind of thing would be present in the mental life of a non-conscious entity. The problem here is that access to any value in our own mental case is very strongly correlated with consciousness. We lack much access, outside of interpretive work, to the aspects of our mentality that are non-conscious. Many of us will try to picture a stream of consciousness with the lights turned out: a stream of information-processing for which "all is dark inside." But, as we are not beings whose self-awareness, satisfaction of desires, pursuit of pleasures and avoidance of pains are much dissociated from phenomenal consciousness, it is not clear that we can correctly imagine a creature who would be like this. So, while we may be able to have knowledge regarding the moral significance of consciousness, it is not clear that we are able to have knowledge regarding the moral (in)significance of non-conscious mentality. Some will respond that we do know something here. Insofar as unconscious pains or desire-frustrations or whatever occur in us, they do not appear to bother us at all. So why think they matter morally? One response is that it is not obvious that our unconscious pains have no moral significance. It is easy to see why we would think so: we have enough to focus on in the case of conscious pains, which are accessible to us and which take up much of our waking days.
Carruthers makes this point well: "[C]onscious subjects are apt only to identify with, and regard as their own, desires which are conscious … from the perspective of the conscious agent, nonconscious desires will seem to be outside of themselves. Such subjects could, then, quite easily be mistaken in denying that the frustration of a nonconscious desire constitutes any harm to them" (1999, 478). It is possible to imagine beings for whom pain or desire-satisfaction are significantly divorced from their conscious mental life, and in such cases a case can be made for the moral significance of these unconscious states. Whether these states are morally significant or not, my point here is that insofar as we lack access to these kinds of states, we lack the same kind of knowledge we claim to have about the moral significance of our conscious states. One might offer some general theoretical argument against the moral significance of non-conscious mentality. But the judgment of necessity is often not based upon such an argument. It is based, for many of us, on an imaginative episode. I am arguing that since this episode is incomplete, it is ill-placed to do any real moral work for us. The judgment of necessity is based upon differential access between conscious and non-conscious mentality. So it is not a good guide to differences regarding the moral significance of conscious and non-conscious mentality. This argument does not depend upon the claim that zombies are inconceivable. Zombies may or may not be conceivable in the senses philosophers debate. The point, here, is that our knowledge that consciousness is morally significant is based upon introspection. But what humans introspect is nothing more than various aspects of consciousness. Humans lack access to non-conscious mentality. So if non-conscious mentality had moral significance (i.e., non-derivative value), we could not come to know it in the same way. We may be able to conceive of zombies, but doing so gives us no introspective access to whatever might be of value in a zombie mental life. Arguably, many of us, lacking this access, jump to the conclusion that there is nothing valuable there. But this jump, being based upon ignorance, is unjustified.

Argument from Positive Goods

The third argument takes a different line, and appeals to the positive goods available to the non-conscious. Begin by considering objective list theories of well-being. These are theories that claim that what it is for a life to go well, for a person to have a good level of well-being, is for that life to "contain" a significant number of goods. Theorists disagree about the contents of the objective list, but common suggestions include desire-satisfaction, perfection of one's nature (Hurka 1996), development of one's capacities (Parfit 1984), self-respect, relationships of care (including friendship, romantic relationships, and parenting relationships with children), achievement, and knowledge (see Fletcher (2015) for a review). Certainly, in humans, consciousness is involved in various ways in the expression or development of many of these items. But this is because consciousness is involved in various ways in many aspects of human psychological life, of human action, of human achievement and knowledge and sociality. It is a further step to hold (and it is not something we can claim to know at present) that consciousness is essentially involved in any of these items. There are many ways to make this point vivid: I will try two.
Consider first, then, an entity with a very simple psychological life. I have in mind something like a snail (Schwitzgebel 2020): an entity with something like 60,000 neurons (compared to 86,000,000,000 in a human), capable of crude forms of learning, mating, and behavioral flexibility. But this creature has only the most simplistic form of cognition. And assume that this creature is not conscious. Now say we face a decision: whether to exterminate this creature for some minor convenience, to clear a field for some mediocre music festival. Consciousness-based theorists should have no problem with this, and perhaps many will not. Now compare this entity with one that has a sophisticated psychological life. This latter entity is not conscious. But it is capable of at least analogues of what, in humans, would look like the satisfaction of desires, the acquisition of knowledge, the nurturing of relationships of care, the pursuit of long-term plans, and the realization of significant achievements. Should we exterminate this creature for our minor convenience? The consciousness-based theorist says so. We move rocks around and uproot trees for minor conveniences. These creatures are no different. But I confess that I lean toward the thought that at least the latter entity deserves protection. It does, arguably, because it is engaged in a range of valuable pursuits: the rearing of its children, the development of relationships with conspecifics and perhaps other species, the pursuit of plans and the satisfaction of goals and desires. In virtue of the presence of these objective goods in its life, the creature has some moral status. Second, consider a creature with significant psychological sophistication, and sophisticated goals and projects, where these goals and projects revolve around helping others. Those who have read Ishiguro's novel Klara and the Sun (2021) might bring to mind something like Klara, the "artificial friend" and protagonist of the novel. We can suppose that this creature lacks phenomenal consciousness. But they retain significant agency, and it is plausible that their goals and projects have moral value. Now suppose that this creature has a limited lifespan, and that they desperately want to perform a series of actions that places certain others in a position to succeed in their life. The question is: does this being have moral status in virtue of having the psychological sophistication to have these goals and projects? If one is worried that the others they wish to help are also non-conscious, just imagine that these others are normal human beings. What I am asking is whether consciousness in the helper is essential for their moral status. I do not see a good case for thinking so. Analogies with humans are conceptually dirty given the role of consciousness in our lives. But even so, I have a few goals and projects that involve helping others. This thought is contentious, but I submit that the success of these goals and projects is of great significance to the value of my own life, independently of any consideration to do with phenomenally conscious experiences related to these projects, and independently of any enjoyment or suffering related to these projects. More directly, I submit that it would be wrong to arbitrarily impinge upon the helper's execution of their planned series of actions, and that the wrongness obtains in virtue of the moral quality of the creature's goals and projects.
The creature deserves, I want to say (and ceteris paribus, of course), the chance to make the difference that they wish to make. And to say that the creature deserves this kind of moral consideration is just to say that the creature has some degree of moral status. Often, discussion of the relationship between consciousness and moral status begins with the presence or absence of suffering. There is good reason for this. Many hold a version of sentientism: a view on which, roughly, it is the valenced conscious experiences (experiences of pain, pleasure, grief, joy, etc.) that are the primary ground of consciousness's moral significance. And one might think that sentientism explains the moral significance of consciousness in terms of how experiences feel: valenced experiences are valuable or disvaluable because they feel good or bad. The value is in the phenomenal character. If this is one's thought, then to imagine a non-conscious entity will be to imagine an entity with the value stripped away. The argument from positive goods emphasizes that there may be more of value in a mental life than aspects of phenomenal character. Some will, of course, reply that these valuable elements nonetheless rely upon the presence of consciousness in some way. The argument from positive goods identifies several candidates that do not conceptually rely on the presence of consciousness. It thus serves as a corrective against any inference from the value of phenomenal character in a mental life to the belief that without phenomenal character, all value goes away. People will disagree. But if you hesitate, that is a sign that your confidence in the judgment that consciousness is necessary for moral status is wavering. And if this judgment is not robust against these arguments, then it is doubtful that this judgment should be given a central place in policy deliberations that attempt to be responsive to the moral status of non-humans.

Meta-Argument from Uncertainty

The consciousness-based family of views is deeply ingrained, and I do not think the arguments I have offered will move many completely off of the view. If, however, the arguments I have offered diminish confidence in the judgment of necessity, a different move may be available. In short, one might tentatively endorse a consciousness-based view of moral status, while endorsing a more open-ended approach to policy that appeals to the moral status of some class of animal or entity. Speaking for myself, as someone who has defended a consciousness-based view in the past, these arguments do diminish my confidence in the judgment of necessity. Though I feel the pull of the intuition that underlies this judgment, I now think that the epistemic credibility of this pull is less than pristine. Why think this matters? Facing a difficult philosophical question, it is common to take into view arguments for and against a position, to examine theoretical and practical consequences of a position, and to end up with a mixed assessment. Often philosophers will endorse a view tentatively, admitting that a different view has some merit, and that their own view has some regrettable consequences. But they will plow ahead, depending on the view as they think through related issues and disputes. This is well and good when charting a philosophical picture of reality. We have to sacrifice perfection for consistency and coherence across a developing range of theoretical commitments. But when we apply philosophical positions to practical problems, we often have to compromise.
It is desirable to find policies that can be grounded on features about which there is overlapping consensus. But, failing that, we have to find a compromise that is acceptable to most. And in figuring out which options may be acceptable to most, we should consider not only which view might win a majority vote, but also the levels of confidence stakeholders have in the range of views and options available. In general, low confidence should suggest a greater willingness to compromise, and more so, arguably, when the stakes of getting it wrong are high, as is the case regarding the practical implementation of theoretical considerations about the moral significance of consciousness. It is, of course, difficult to say what levels of confidence should generate a willingness to compromise. I am not going to put an artificial number on it. The main aim of this paper is to encourage bioethicists, neuroethicists, and policy makers to consider whether policies that depend upon widely-held views about the value of phenomenal consciousness are plausible enough to go forward without consideration of alternative sources of value. And the point of this sub-section is to insist that even if one is not fully swayed by the arguments offered above, one might still be open to endorsing policies that offer protections on the basis of consciousness-independent features. A further difficult question is this. Once we reject the judgment of necessity, what consciousness-independent features of an entity might be sufficient for moral status, or might impact an entity's degree of moral status? Answering this is to a large extent beyond the scope of the present paper, only because the theoretical task here is fairly big. The literature already offers several options: self-awareness, cognitive sophistication, the presence of affective states or systems, the ability to plan, the possession of narrative identities, and more (note 7). One leading family of views tends to lump these kinds of features together under the heading of "cognitive sophistication" (see Jaworska and Tannenbaum 2021). But it is unclear whether talk of cognitive sophistication is fine-grained enough to identify the range of features that may be morally significant. In my view, it is probably better to think, first, in terms of features that are, in themselves, morally valuable, and thus may ground attributions of moral status. These may be related to cognitive capacities, and they may not; the notion of "cognition" does not have sharp boundaries. One might instead, for example, talk of the capacities required for features such as desire-satisfaction, development of one's capacities, self-respect, relationships of care, achievement, or knowledge. In the rest of this paper, I will gloss over such options by speaking of "consciousness-independent features."

Practical Consequences

How would a rejection or a downgrading of the judgment of necessity look in practice? In this sub-section, I consider two examples (note 8). Consider, first, the precautionary approach to the moral status of non-human animals. On a consciousness-based approach, the precautionary approach is motivated in part by epistemic difficulties confronting any attempt to decide which animals possess consciousness, and which do not (note 9).
Birch (2017) proposes a precautionary principle that specifies an epistemic bar we need to clear, and an action rule that we need to follow once the bar is cleared:

BAR: For the purposes of formulating animal protection legislation, there is sufficient evidence that animals of a particular order are sentient if there is statistically significant evidence, obtained by experiments that meet normal scientific standards, of the presence of at least one credible indicator of sentience in at least one species of that order.

ACT: We should aim to include within the scope of animal protection legislation all animals for which the evidence of sentience is sufficient, according to the standard of sufficiency outlined in BAR. (Birch 2017, 5)

Note 7: One might doubt that a system could possess some of these features (e.g., self-awareness or affective states) independently of phenomenal consciousness. But unless we assume a version of functionalism about consciousness, it is possible that a system could display whatever functional or behavioral manifestations we associate with, e.g., self-awareness or affective states, while lacking phenomenal consciousness. Of course, whether emotions, sensory states like pain, or self-awareness could be fully independent of phenomenal consciousness are topics of dispute in the philosophy of mind (see Rosenthal 1991; Shepherd 2017), so it remains debatable just what items might be on a list of features independent of consciousness.

Note 8: A referee raises the point that a rejection of the judgment of necessity might create a route to the consideration of the moral status of plants. For some argue that plants display features of cognition, mentality, or goal-directed behavior (Maher 2017). (In step with this, some also argue that plants may be conscious.) In my view, we should not rule out the possibility that folk psychology is very wrong about the mentality, or the moral status, of something. So we should be open to changing our minds about things like plants if our best evidence pushes in that direction.

Note 9: Aspects of the problem have been covered by many (see Dawkins 2008; Shepherd 2018b; Carruthers 2019; Murray 2020; Shevlin 2020; Birch 2022; Sawai et al. 2022; Johnson 2022). Ways to address it have been fruitfully discussed as well (see Shea and Bayne 2010; Shea 2012; Birch 2022). But the problem remains serious.

I regard a precautionary approach to animal moral status as promising, even if issues remain regarding, for example, the appropriate place for an evidential bar, how to formulate a principle that is neither over- nor under-inclusive (Woodruff 2017), and whether and how a precautionary principle could reflect something like levels of moral significance in differently conscious animals (Klein 2017; Shepherd 2021). But if we downgrade the judgment of necessity, we have to think about a precautionary principle that is not (only) consciousness-based. First, since consciousness-independent factors may be present in animals in the absence of solid evidence regarding sentience, the evidential bar needs additional sources, beyond the science of phenomenal consciousness. Second, the action rule may need to expand, and to take a disjunctive form. Third, since consciousness-independent factors may influence an animal's level of moral significance, attention to tradeoffs between animals may need to attend to evidential factors beyond those regarding phenomenal consciousness. Consider, as a second example, the structure of recent policy debate regarding cerebral organoids.
It is often assumed that the key issue is the determination of whether an organoid is conscious or has the potential to develop consciousness (Koplin and Savulescu 2019; Niikawa et al. 2022). Koplin and Savulescu, for example, argue that while research using non-conscious organoids can be regulated by "existing frameworks for stem cell and human biospecimen research" (2019, 765), additional regulation is needed in the case of conscious or potentially conscious organoids. But it may soon be possible to integrate cerebral organoids (whether human or not) with synthetic material, creating synthetic biological intelligences. In unpublished work, Kagan et al. (2022) report the creation of DishBrain, a structure that places organoids into a computational framework using a silicon high-density multi-electrode array. This set-up allowed them to control the feedback organoid neurons receive, and to monitor neural output. After embedding this set-up computationally into a simulation that functionally mimicked the game "Pong," Kagan et al. report that the organoid-silicon system demonstrated evidence of learning in response to feedback. DishBrain serves to illustrate the possibility of organoid-involving systems that achieve some level of agency. It may soon be the case that organoid-involving systems display a range of morally relevant, consciousness-independent features. If so, pressing forward with a consciousness-based approach to moral status may fail to account for the moral significance of a range of systems relevant to biomedical and neuro-computational research. This is because existing frameworks for stem cell and human biospecimen research do not consider the relevant range of possibilities. If we downgrade the judgment of necessity for policy-guidance purposes, the relevant range of possibilities will need to be considered, with an eye to the moral significance of consciousness-independent features.

CONCLUSION

I have outlined the primary options for taking a consciousness-based approach to non-human moral status, as well as options for taking an alternative approach. And I have offered arguments against the judgment at the root of the consciousness-based approach. Moving beyond a consciousness-based approach will require further consideration of the features that, independent of consciousness, may have moral significance.
Using the Google Earth Engine cloud-computing platform to assess the long-term spatial-temporal dynamics of land use and land cover within the Letaba watershed, South Africa

Abstract

Population growth and environmental shifts have elevated the pressure on land use and land cover (LULC), necessitating vital management and adaptive strategies to preserve the balance between ecosystem services and human well-being in watersheds. It is pivotal to understand the implications of human-induced shifts from natural to human-dominated surroundings. This study utilized Google Earth Engine (GEE) to analyze 31 years of LULC changes in the Letaba watershed. Using GEE's random forest, LULC classes were mapped with 93% to 99% accuracy across four timeframes (1990, 2000, 2010, and 2021). Trends revealed declining water bodies (-4%), bare surfaces (-2%), natural forest (-3%), and grassland (-3%), while shrublands, plantations, and built-up areas increased at annual rates of 41%, 24%, and 47% respectively. This transformation reflects population-driven shifts, necessitating adaptive strategies. Given the importance of plantations for income, embracing climate-smart agriculture could ensure long-term food and environmental security, thus addressing the evolving dynamics in the Letaba watershed.

Introduction

Land resources form the most important source of livelihoods and provide key ecosystem goods and services globally. They play a vital role in supporting human needs, such as housing, food production, and settlement, as well as ecosystem services like climate regulation, soil formation, nutrient cycling, and supporting natural vegetation (Alemayehu et al. 2009; Amsalu et al. 2007; Mekuriaw 2017). However, the rapid growth of the population has led to increased pressure on land resources, resulting in changes in Land Use and Land Cover (LULC). Land use refers to how humans utilize and manage the land for various purposes. Land cover, on the other hand, refers to the physical and biological surface characteristics of the land (Jansen and Di Gregorio 2002; Mendoza et al. 2011). Such changes are the primary cause of global environmental modifications (Liping et al. 2018). For instance, Salazar et al. (2015) found that LULC changes impact land surface climate feedbacks, altering the exchange of heat, moisture, momentum, trace-gas flux, and albedo, which in turn affects local and regional climate. Additionally, Bufebo and Elias (2021) reported that LULC changes lead to soil and water quality degradation, loss of biodiversity, and a reduced ability of watersheds to sustain natural resources and ecosystem services. To effectively manage LULC, it is crucial to understand the processes driving their changes, the forces transforming natural habitats into human-dominated environments, and their consequences. This understanding will help in implementing appropriate LULC planning practices within watersheds, which play a vital role in managing soil and water resources worldwide (Grecchi et al. 2014; Kerr and Chung 2002). However, consistent monitoring has been lacking in many sub-Saharan African countries, especially in rural catchments. Remote sensing technology continues to provide the spatial and temporal coverage necessary for monitoring the environment, owing to its wide observation range (Cui et al. 2022). Globally, numerous studies have been carried out on LULC change monitoring based on remote sensing. Shen et al.
(2022) used time-series Landsat imagery to classify LULC in the Huangshui River Basin in China from 1987 to 2018 based on Random Forest (RF) and detected bidirectional spatio-temporal changes based on distribution probabilities. Similarly, Zurqani et al. (2018) utilized Landsat 5 and 8 images to map land use changes in the Savannah River Basin in the United States of America (USA) from 1999 to 2015 using RF. Pan et al. (2022) classified LULC using Landsat 5 and 8 images in Australia and the USA based on CART and RF. Furthermore, Leta et al. (2021) used Landsat imagery to model LULC in the Upper Blue Nile Basin, Ethiopia, from 1990 to 2019 based on the Land Change Modeler and further predicted future LULC for 2035 and 2050. These studies confirm the effectiveness of remote sensing in extracting LULC changes. The availability of satellite imagery from platforms such as Landsat, Sentinel, MODIS, and other commercial satellites has greatly improved. These datasets provide consistent seasonal and long-term coverage, allowing for the analysis of LULC changes over time (Mashala et al. 2023). However, traditional desktop hardware- and software-based methods are time consuming, as they require substantial time for image processing and mosaicking when working at larger scales (Gorelick et al. 2017; Kumar and Mutanga 2018). This leads to major challenges in downloading remote sensing data and to low processing efficiency at large scales. The emergence of remote sensing cloud-computing and cloud-storage platforms provides new technical means for downloading and processing massive remote sensing datasets. The cloud-computing platform Google Earth Engine (GEE) makes remote sensing data conveniently accessible and enables high-speed analysis using advanced processing techniques (Hoque et al. 2022; Sidhu et al. 2018). The platform integrates more than 200 remote sensing datasets and provides Python and JavaScript APIs that allow users to process data according to their own needs (Shafizadeh-Moghadam et al. 2021). GEE hosts advanced classification tools that enable one to run supervised learning algorithms across huge datasets (Lee et al. 2016; Cui et al. 2022). Algorithms found in GEE, including Random Forest (RF), classification and regression tree (CART), support vector machine (SVM), and Naive Bayes (NB) classifiers, can accurately distinguish classes within a heterogeneous watershed. Most studies have revealed the effectiveness of RF in classifying change, achieving higher accuracy than CART, NB, and SVM (Delalay et al. 2019; Pan et al. 2022; Loukika et al. 2021). Although RF has proved capable of differentiating classes, the accuracy of LULC mapping depends on the training and validation data. The common methods used for generating training points in GEE are manual labelling (Yangouliba et al. 2023), semi-automated training (Verde et al. 2020), and automated training (Chaves et al. 2023; Zhang et al. 2022). Automated training methods in remote sensing and geospatial analysis offer several advantages that enhance efficiency, scalability, and objectivity in the classification and analysis processes (Chaves et al. 2023). These advantages are particularly valuable when working with large and complex datasets covering extensive geographic areas or temporal periods (Zhang et al. 2022). However, automated training might struggle to capture contextual cues and can be limited to spectral information alone (Zhang et al.
2021). On the other hand, field-collected point data provide ground-truth accuracy, contextual insight, and robust validation potential. The semi-automated approach acknowledges that human judgment is crucial for recognizing complex patterns and understanding context (Xiong et al. 2017). Researchers have conducted detailed analyses of land use and land cover changes focusing on sub-watersheds (de Sousa et al. 2021; Kulithalai Shiyam Sundar and Deka 2022), urban areas (Lin et al. 2020; Agariga et al. 2021; Sumari et al. 2020), and coastal and ecological areas (Abijith and Saravanan 2022). These studies demonstrated the processing power of GEE for mapping and monitoring LULC. Despite this high-speed processing power, only a few studies have attempted to use this cloud-computing platform in sub-Saharan African countries to monitor and map the spatio-temporal changes of LULC at the watershed scale (Kombate et al. 2022; Leta et al. 2021; Yangouliba et al. 2023). Therefore, this study aims to determine the spatial and temporal extent of LULC changes in the Letaba watershed by achieving the following objectives: (1) to determine and select suitable variables for predicting LULC; (2) to evaluate the performance of the RF machine learning algorithm on the GEE platform in detecting and mapping LULC within the Letaba watershed from 1990 to 2021; and (3) to discuss the potential impacts of LULC change on the watershed.

Study area

The study was conducted in the Letaba watershed, situated between longitudes 30°0′ and 31°40′ East and latitudes 23°30′ and 24°0′ South, in Limpopo Province, South Africa (DWAF 2006) (Figure 1). The Letaba watershed covers a surface area of 1,451,864 ha. The area has a mean annual evapotranspiration ranging from 1100 mm to 1300 mm, rainfall of 300 mm to 400 mm, and annual runoff of 574 million m³, and the mean annual temperature ranges from a minimum of 18 °C in the mountainous areas to a maximum of 28 °C in the lowlands. The major tributaries of the Groot Letaba River, which drain into the catchment, are the Klein Letaba, Middle Letaba, Molototsi, and Litsetele rivers (DWAF 2006). The Letaba catchment has more than 20 constructed dams and weirs, which have resulted in the watershed being highly regulated. The water resources available within the watershed are overexploited to meet the demand for domestic use and commercial needs (afforestation, industry, and irrigation).

Field data

Field data collection for land cover types was conducted manually between February and March 2021, using handheld Garmin global positioning system (GPS) units. The training dataset consisted of a total of one thousand and fifty (1050) ground-truth points, with one hundred and fifty (150) points allocated to each individual class. A balanced distribution strategy for class training samples was employed in the study. This approach was adopted to prevent any bias towards specific classes (Mellor et al. 2015). The use of a balanced data distribution is associated with enhanced prediction accuracy and reliability across all classes (Mellor et al. 2015). The collected training data were converted into a shapefile using ArcGIS and subsequently imported into the GEE platform for model training and validation for the LULC assessment of the Letaba watershed in 2021.
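As a minimal sketch of this import step, assuming the shapefile has already been uploaded as an Earth Engine table asset, the GEE Python API can load and sanity-check the ground-truth points. The asset ID and the 'class' label property below are illustrative placeholders, not values taken from the study.

```python
import ee

ee.Initialize()

# Hypothetical asset ID for the uploaded ground-truth shapefile; the study's
# actual asset path and label property name are not given in the text.
points = ee.FeatureCollection('users/example/letaba_groundtruth_2021')

# Check the balanced design described above: seven classes with 150 points
# each should appear in the histogram of the label property.
print(points.aggregate_histogram('class').getInfo())
```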
Similarly, high-resolution imagery from Google Earth was used to generate one thousand and fifty (1050) training data points serving as ground truth references for the year 1990. These points were then converted into shapefiles using ArcGIS and integrated into the GEE platform to facilitate the training of the LULC map for 1990. The same training dataset was further utilized for mapping LULC changes in the 2000 and 2010 images. In accordance with established standards in machine learning evaluation, the imported training data were partitioned into two distinct sets: 70% for training and 30% for validation (Figure 2). Using this methodology, a comprehensive classification of seven distinct land cover types was produced within the confines of the Letaba watershed (Table 1).

Data acquisition and processing

The Landsat 5 Top of Atmosphere (TOA) reflectance products, denoted LT05/C01/T1_TOA, were utilized for the years 1990, 2000, and 2010. Similarly, Landsat 8 TOA reflectance products, identified as LC08/C01/T1_TOA, were employed for the year 2021. These data resources are available within the Google Earth Engine (GEE) cloud database, accessible via this link: https://earthengine.google.com/. The Landsat 5 and Landsat 8 platforms encompass 7 and 13 spectral bands respectively, each with a spatial resolution of 30 meters. The TOA reflectance values for each spectral band within the GEE database were used in this study. To mitigate the impact of clouds and shadows, the cloud filtering function (QA_PIXEL) was applied, removing these unwanted elements from all filtered images. Filtering and mosaicking were executed using the 'filter' function. The date parameters (start and end) were set to cover the period from January 1st to December 31st for each of the four respective time intervals. Furthermore, the filtered images were cropped to match the boundaries of the study area using the 'filterBounds()' function. The resulting stacked images underwent normalization to account for illumination variations and to minimize the presence of clouds. Inclusion criteria involved selecting images with less than 20% cloud cover, except for the 2010 image, for which conditions necessitated the use of images with less than 50% cloud cover due to climatic considerations. The images were enhanced and smoothed to produce quality results.

Predictor variables

Within the GEE platform, the random forest (RF) algorithm was employed to identify the most influential spectral bands from the Landsat images. This selection process was important for the prediction and mapping of LULC across the Letaba watershed. Utilizing this technique, the researchers aimed to reduce redundancy within the explanatory variables (Dube et al. 2014; Mudereri et al. 2020).
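The acquisition and filtering steps described earlier in this section can be sketched with the Earth Engine Python API roughly as follows. The study-area asset is a placeholder assumption, while the collection IDs, date windows, and cloud-cover thresholds follow the text.

```python
import ee

ee.Initialize()

# Hypothetical study-area asset standing in for the Letaba boundary.
aoi = ee.FeatureCollection('users/example/letaba_watershed').geometry()

def annual_composite(collection_id, year, max_cloud=20):
    """Filter a Landsat TOA collection to one calendar year over the study
    area, keep scenes under the cloud-cover threshold, and build a median
    mosaic clipped to the watershed."""
    return (ee.ImageCollection(collection_id)
            .filterDate(f'{year}-01-01', f'{year}-12-31')
            .filterBounds(aoi)
            .filter(ee.Filter.lt('CLOUD_COVER', max_cloud))
            .median()
            .clip(aoi))

# Landsat 5 TOA for 1990, 2000 and 2010; Landsat 8 TOA for 2021.
# The 2010 composite relaxes the threshold to 50%, as the paper notes.
img_1990 = annual_composite('LANDSAT/LT05/C01/T1_TOA', 1990)
img_2010 = annual_composite('LANDSAT/LT05/C01/T1_TOA', 2010, max_cloud=50)
img_2021 = annual_composite('LANDSAT/LC08/C01/T1_TOA', 2021)
```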
Image classification and accuracy assessment

Google Earth Engine (GEE) provides a range of supervised classifiers, including the classification and regression tree (CART), support vector machine (SVM), Naive Bayes (NB), and Random Forest (RF). All four of these classifiers were tested for their effectiveness in categorizing LULC changes within the Letaba watershed. Following testing, the RF classifier emerged as the best performer among these options. As a result, it was subsequently utilized to map the dynamics of LULC within the Letaba watershed. Support for this decision was also drawn from the existing literature. The RF classifier available in GEE (ee.Classifier.smileRandomForest) has consistently demonstrated outstanding performance in LULC classification when compared to CART, SVM, and NB, as attested by Abijith and Saravanan (2022), Gxokwe et al. (2022), and Kulithalai Shiyam Sundar and Deka (2022). The algorithm was therefore selected in this study because of its robustness and predictive accuracy, even when applied to data with strong noise (Zurqani et al. 2018).

The RF classifier is a non-parametric algorithm that constructs an ensemble of decision trees from random subsets of features using a bagging randomisation process. RF is robust to data noise and overfitting, and works well with complex data at high accuracy (Piao et al. 2021). The classifier uses random sample data to generate multiple decision trees independently. The best split at each node of a decision tree relies on a randomly selected subset of the input prediction variables (Zhao et al. 2021). The process continues until the samples are similar and splitting no longer occurs. The final class prediction is chosen by majority voting across the decision trees.

For LULC classification, accuracy assessment is imperative to quantify the agreement between the ground truth and the classification outcome. To evaluate the classification accuracy for the year 2021, a subset of four hundred and twenty (420) reference points (representing 30% of the dataset) was used. Similarly, for the years 1990, 2000, and 2010, four hundred and twenty (420) reference points were also used to validate the accuracy. The validation dataset was employed to generate a confusion matrix within GEE for accuracy assessment. The assessment metrics encompass producer accuracy (PA), overall accuracy (OA), and user accuracy (UA), with Kappa excluded owing to criticism of its suitability for accuracy assessment (Pontius and Millones 2011).

Land use change detection

The method of post-classification comparison was employed to determine the magnitude, trend and rate of LULC change within the Letaba watershed. An area comparison analysis was conducted by subtracting the total area of each class between dates, yielding both positive (increasing) and negative (decreasing) values. The percentage and rate of LULC change were calculated using the following formulas. The proportion of each LULC class type:

Ai% = (Ai / At) × 100

The change for each LULC class type was computed as:

ΔAi = Ait1 − Ait2

and the annual rate of change:

Air = ((Ait1 / Ait2) − 1) × 100%

In these formulas, i represents the particular LULC class type, Ai its area, At designates the total study area, and Ai% denotes the proportion of each LULC class area. Ait1 and Ait2 refer to the total area of a LULC class type in specific years 1 and 2 respectively (Lu et al. 2013; Piao et al.
2021). Air refers to the rate of change; the magnitude of change between the specified years and the overall change from 1990 to 2021 were analysed (Piao et al. 2021; Tian et al. 2014). Notably, this method offers the advantage of providing insight into both the magnitude and direction of change, while also indicating the extent of the areas that have undergone alteration.

Variables of importance

The RF classifier implemented within GEE was used to train the samples, with a setting of 100 decision trees, for mapping LULC in the Letaba watershed. The RF classifier demonstrated a notably high overall level of accuracy in identifying and delineating the various LULC categories across the watershed. The spectral bands of the Landsat images, specifically band 2 (blue), band 5 (near-infrared or NIR), and band 6 (shortwave-infrared or SWIR), were identified as the most influential variables for the RF classification process (Figure 3). These specific bands were chosen to predict LULC classes for the years 1990, 2000, 2010, and 2021. This selection served the purpose of minimizing data redundancy among the explanatory variables and enhancing classification precision.

The cloud-based methodology adopted for this study showed a notable capability to accurately delineate LULC classes, offering a visual representation that effectively covered the majority of classes present within the Letaba watershed. To optimize predictive performance, less significant variables were omitted from consideration, as they did not contribute meaningfully to the prediction process. The selection of informative bands made it possible to distinguish classes from one another and increased the classification accuracy for the study area.

Land cover and land use classification and accuracy

The RF classifier successfully distinguished the LULC classes, achieving OAs of 0.95, 0.97, 0.93 and 0.99 for the years 1990, 2000, 2010 and 2021 respectively (Table 2). UA and PA values, derived from the confusion matrix, were computed for each class type across all years. Plantations and natural forests occur in close proximity; however, the RF model effectively captured the variations in the upstream area of the Letaba watershed. Notably, the class with the highest accuracy was shrublands, with PAs of 100% and UAs ranging between 93% and 97% across all time periods. Likewise, water bodies were accurately classified, with PAs ranging from 92% to 98% and UAs ranging from 95% to 100%. Comparatively lower classification accuracies were achieved for the natural forest class, with PAs varying from 83% to 100% across all time frames. Regarding error assessment, the 2021 classification showed lower values for error of commission (EC) and error of omission (EO), ranging between 0% and 5%. Conversely, the classification for 1990 exhibited higher values of EC (14%) and EO (12%) respectively.
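A hedged sketch of the training and accuracy-assessment steps, again using the Earth Engine Python API: the band list follows the paper's Landsat 8 numbering, while the 'landcover' property and the image and point variables (carried over from the earlier sketches) are assumptions.

```python
import ee

ee.Initialize()

# 'img_2021', 'training' and 'validation' come from the previous sketches.
image = img_2021
bands = ['B2', 'B5', 'B6']  # blue, NIR, SWIR1 (Landsat 8 numbering)

# Sample the composite at the training points.
samples = image.select(bands).sampleRegions(
    collection=training, properties=['landcover'], scale=30)

# Train a Random Forest with 100 trees, as the study specifies.
classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty='landcover', inputProperties=bands)

classified = image.select(bands).classify(classifier)

# Confusion matrix from the 30% hold-out points.
validated = classified.sampleRegions(
    collection=validation, properties=['landcover'], scale=30)
matrix = validated.errorMatrix('landcover', 'classification')
print('OA:', matrix.accuracy().getInfo())
print('PA:', matrix.producersAccuracy().getInfo())
print('UA:', matrix.consumersAccuracy().getInfo())
```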
The spatiotemporal changes of the Letaba watershed over 31 years

Over the past 31 years, significant shifts in land use and land cover (LULC) have been observed within the Letaba watershed (Figure 4, Table 3). By the year 2021, there had been a noteworthy transformation in the LULC composition. In particular, the natural forests and grasslands have undergone deforestation and degradation, resulting in a decline in their respective percentage covers from 13% and 23% in 1990 to 6% and 9% in 2021. Furthermore, there has been a reduction in the percentage cover of bare surfaces and water bodies, which decreased from 44% and 13% in 1990 to 29% and 1% in 2021, respectively. In contrast, there has been a considerable increase in the percentage covers of plantations, shrublands, and built-up areas. These categories saw their coverage expand from 4%, 2%, and 2% in 1990 to 24%, 15%, and 16% in 2021, respectively.

The rates of change revealed a different progression for each class category from 1990 to 2021 (Table 4). Bare surfaces, water bodies, natural forest and grassland areas showed declining trends at rates of −2%, −4%, −3% and −3% respectively, whereas the plantation area increased at a rate of 24%. Built-up areas and shrublands showed a clear expansion trend, increasing significantly from 1990 to 2021 at rates of 47% and 41% respectively. The highest intensity of built-up area and plantation growth occurred between 1990 and 2021, and that of shrublands between 2010 and 2021 (Figure 5). Conversely, the highest decline intensity of water bodies, natural forests and bare surfaces occurred between 2010 and 2021, and that of grassland occurred in two time periods, between 2000 and 2010 and between 1990 and 2021.

Moreover, the study revealed a conversion of the area covered by natural forest to plantations in the upstream part of the watershed, whereas grasslands are rapidly being replaced by shrublands downstream. Bare surfaces are swiftly being substituted by built-up areas and plantations in the middle reaches of the watershed. Although bare surface areas are decreasing, they remain the most dominant land cover in the watershed. The water bodies are diminished by plantations and by the construction of dams alongside the streams in the middle and upper reaches, which prevents the flow of water to the downstream parts of the watershed. Additionally, water bodies downstream of the watershed are becoming drier and are being converted into bare surfaces.

Discussion

Accurate detection and mapping of LULC are important for understanding the drivers that transform the natural environment into human-dominated landscapes. The aim of this study was to determine the spatiotemporal changes in LULC within the Letaba watershed from 1990 to 2021 using GEE. The scalability and flexibility of the GEE platform demonstrated its capability for continuous monitoring of LULC changes and for supporting targeted management and conservation strategies across diverse landscapes and regions (Pan et al. 2022). The limitations of GEE include the spatial and temporal resolutions of the satellite imagery, which may not be adequate for capturing fine-scale changes in a heterogeneous landscape. This can lead to challenges in accurately identifying and classifying different land cover types in areas with complex and mixed land use patterns. Cloud cover can be a significant issue in tropical regions; it can limit the availability of cloud-free imagery, which affects the temporal consistency and reliability of LULC change detection (Delalay et al.
2019). Additionally, GEE's algorithms and processing methods might not be optimized for the specific characteristics and challenges of mapping LULC changes in watersheds. GEE might perform well in certain regions but may lack accuracy and generalization when applied to complex and heterogeneous semi-arid landscapes (Gxokwe et al. 2022). The interpretation of LULC changes also requires careful consideration of local socio-economic and cultural factors. GEE's remote sensing approach might not fully capture the drivers and underlying reasons for LULC changes, limiting the understanding of the complexities involved.

Based on the analysis using GEE, the use of RF for classification provided useful information for detecting and mapping the spatiotemporal changes of the Letaba watershed. The results from the RF analysis demonstrated the efficacy of utilizing specific spectral band values to differentiate the various land cover classes present in the watershed. Notably, the blue, near-infrared (NIR), and shortwave-infrared 1 (SWIR1) bands exhibited significant discriminatory power among these classes (Mudereri et al. 2020). Among these bands, the NIR band emerged as the most pivotal due to its sensitivity to factors such as vegetation type, water content, density, and overall vegetation health (Forkuor et al. 2018; Kyere et al. 2019). Likewise, the SWIR1 band's sensitivity to moisture levels and shading in the forest stand structure proved valuable in the classification process (Izadi and Sohrabi 2021). Additionally, the blue band played a crucial role in distinguishing between soil and vegetation, as well as in differentiating deciduous trees from coniferous vegetation (Acharya and Yang 2015; Zeferino et al. 2020).

The study's findings also suggested that a strategic selection of a few informative bands can potentially outperform classification using the entire set of wavebands (Cai et al. 2018; Dube and Mutanga 2015; Mudereri et al. 2020). While the spectral bands alone demonstrated the capability to differentiate land cover, the incorporation of indices such as the Normalized Difference Vegetation Index (NDVI), the Modified Normalized Difference Water Index (MNDWI), the Normalized Difference Built-Up Index (NDBI), and slope could potentially provide additional value in the prediction and mapping of LULC (Tolentino and Galo 2021). Despite RF's success in effectively mapping LULC within the Letaba area, it is important to acknowledge certain limitations associated with the classifier. When dealing with datasets characterized by imbalanced class distributions, RF might exhibit a bias towards the majority class, potentially resulting in decreased accuracy for minority classes and a risk of overfitting (Pan et al. 2022).

The information regarding precision and accuracy presented in the classified map outcomes is of significant importance for users aiming to make effective use of the generated maps (Munthali et al.
2019). Therefore, conducting an accuracy assessment remains a pivotal step in image classification, and this study achieved commendable accuracy results, even though some errors arose from spectral confusion among natural forests, bare surfaces, and plantations. The established standard for overall classification accuracy typically stands at 85% (Yesuph and Dagnew 2019). The outcomes of this study outperformed this standard, achieving remarkable overall accuracy rates of 95%, 97%, 93%, and 99% for the years 1990, 2000, 2010, and 2021 respectively. This achievement indicates that the classification results were both rational and dependable, enabling subsequent post-classification change detection comparisons. The high overall accuracy achieved can be attributed to the implementation of the RF classifier, which effectively minimizes errors as the number of decision trees increases (Abijith and Saravanan 2022), and to the selection of variables by importance, which reduces the redundancy of correlated variables. It is essential to note that the accuracy of LULC classification within GEE depends on the availability of high-quality training data.

Utilizing ground truth data proved effective in achieving high classification accuracy within the Letaba watershed. Nonetheless, acquiring ground truth data for validation can be difficult in remote and inaccessible areas. This can introduce inaccuracies into the classification procedure and reduce confidence in the outcomes. Xiong et al. (2017) automated crop mapping using GEE. Such a method can quickly generate training data, enabling rapid response to changing land cover conditions, and provide consistent training samples across the study area, reducing potential bias (Verde et al. 2020; Xiong et al. 2017). Automated methods are useful when large areas need to be classified quickly or changes monitored frequently. They work well in situations where ground truth data are scarce or when capturing rapid changes is crucial. Although the automated methods in GEE provide an impressive computational infrastructure, they still rely on remote sensing data and are not a substitute for detailed on-the-ground surveys and local knowledge (Pan et al. 2022). Field-based data collection and ground-truthing are essential for validating and improving the accuracy of LULC maps generated through GEE (Kandekar et al. 2021). Automated and field-based data collection methods share the common objective of enhancing accuracy in land cover analysis. Both approaches contribute to reliable classification outcomes by providing representative training samples. Automated methods leverage algorithms to rapidly generate training points from spectral properties, enabling efficient processing of large datasets (Zhang et al. 2021). In contrast, field-based data collection involves on-site observations, capturing contextual details that algorithms might miss (Pande et al. 2018).

The process and rate of change over the 31 years were analysed using four time periods, with rapid increases in shrublands and built-up areas from 2% and 2% in 1990 to 15% and 16% in 2021 respectively. The rate of increase (47%) in the built-up areas (residential, commercial, roads, and industry) could be attributed to the increasing demand for land driven by population growth and the development of commercial infrastructure taking place in the Letaba watershed (DWAF 2004; Querner et al.
2016). Such expansion of the built environment transforms the natural landscape, leading to severe environmental consequences such as the loss of natural habitat. This disrupts the connectivity between ecosystems, isolating populations of plants and animals and reducing genetic diversity, making species more vulnerable to extinction (Hailu et al. 2021). Increased impervious surfaces, such as pavements, reduce natural water infiltration, leading to increased surface runoff during rainfall events. This can result in water pollution as runoff carries pollutants from urban areas into nearby water bodies, negatively impacting aquatic ecosystems and biodiversity (Du Plessis et al. 2014; Namugize et al. 2018).

Moreover, plantations also increased, from 4% in 1990 to 24% in 2021, which indicates that plantations are the main source of income within the watershed. The rate of increase (24%) of plantations in the Letaba watershed can be attributed to the rapid growth in demand for arable land (Marks-Bielska and Witkowska-Dabrowska 2021). The uncontrolled expansion of plantation lands often results in deforestation, wetland conversion, or the draining of natural habitats to make way for crops or livestock. This leads to habitat loss and fragmentation, disrupting ecosystems and reducing biodiversity as native species struggle to adapt or face displacement. The intensification of plantations using agrochemicals, irrigation, and monoculture practices can lead to soil degradation, erosion, and loss of fertility. This not only affects agricultural productivity but also impacts nearby water bodies through the runoff of pesticides and fertilizers, leading to water pollution and harm to aquatic life (Chen et al. 2014; Uniyal et al. 2020). The population increase has also resulted in the conversion of bare surfaces and grasslands into built-up areas between 1990 and 2021.

The rate of decrease (−3%) of grassland could be attributed to overgrazing, to encroaching invasive tree species which suppress the growth of grasses, and to the effects of climate change. The decreasing grassland will have implications for the functioning of ecosystems in the area, for climate regulation and for the depletion of species (Ceballos et al. 2010; Zavaleta and Hulvey 2004). Grasslands play a crucial role in providing ecosystem services such as carbon sequestration, water filtration, and soil conservation. Their destruction can release stored carbon into the atmosphere, reduce water quality due to increased runoff, and lead to soil erosion (Wang et al. 2022).

Bushes are encroaching on the grasslands at an increasing rate of 41%. The invasion of bushes could be attributed to veld fires and climate change effects downstream of the Letaba watershed. The invasion will also alter the vegetation structure and water use characteristics in ways that can reduce runoff or decrease groundwater recharge, resulting in a loss of biodiversity (Le Maitre et al. 2020). The finding of grassland being converted to bushland corresponds with the study by Yesuph and Dagnew (2019) in the Beshilo Catchment in Ethiopia. The increased rate of shrub encroachment can fragment and disrupt natural ecosystems. The formation of dense shrub patches may act as a barrier to the movement of wildlife, making it difficult for some species to access food, water, or breeding sites (Shiferaw et al. 2019). This fragmentation can reduce gene flow and genetic diversity within populations, potentially leading to inbreeding and reduced resilience to environmental changes (Wang et al. 2022).
Comparatively, the rate of decline (−4%) of water bodies can be attributed to the significant increase in plantations along the rivers and streams, to invasive species, and to the many constructed dams and weirs within the watershed, which is becoming problematic (Pullanikkatil et al. 2016; Munthali et al. 2019). This aligns with the study by Munthali et al. (2019) in the Dedza district of Malawi, where an increase in plantations alongside the streams was observed at the expense of the water bodies. The reduction of water bodies leads to degraded water quality, the loss of critical ecosystem services such as water purification and flood regulation, and increased vulnerability to climate change impacts. This threatens the livelihoods of local communities dependent on fisheries and agriculture, while also impacting the overall health and resilience of the surrounding terrestrial ecosystems (Rotich et al. 2022).

The study also revealed that natural forests deteriorated from 13% in 1990 to 6% in 2021 in the upper watershed. The declining rate (−3%) of natural forest cover could be attributed to the rapid population growth driving demand for arable land for food production, as forest is mostly converted into plantations. This result corresponds with the finding by Agariga et al. (2021), who observed the conversion of forest to agriculture between 1986 and 2020. The decline in natural forest also reduces the size of carbon sinks, thereby contributing to greenhouse gas emissions (Berry et al. 2010). Bare surfaces decreased from 44% in 1990 to 29% in 2021, a rate of decrease of −2%. Most of the bare surfaces were converted into built-up areas and plantations.

The changes in LULC within the Letaba watershed are driven by a lack of regulations and policies, and by socioeconomic factors. It is therefore essential to understand these drivers and the policy contexts behind LULC changes in order to develop effective strategies for sustainable land management, biodiversity conservation, and environmental protection. Integrating environmental considerations into policy-making processes can help balance human development needs with conservation goals and foster sustainable land use practices (Rotich et al. 2022). The information gathered from this study can be used to develop targeted management strategies to address challenges such as declining water bodies, grasslands, and natural forest, and to promote sustainable land use practices. To manage the socioeconomic implications of LULC changes in the Letaba watershed effectively, it is crucial to integrate social considerations into land use planning and decision-making processes. Community engagement, participatory approaches, and consideration of local knowledge can help ensure that the interests and well-being of the local population are considered in development and conservation initiatives. Moreover, balancing economic development with environmental sustainability is essential for promoting the long-term prosperity and resilience of local communities.
Conclusion

With the right set of variables, the GEE RF classifier can accurately map and predict the extent and rate of LULC dynamics. The findings revealed that the Letaba watershed experienced significant LULC change between 1990 and 2021. During these years, the watershed lost bare surfaces, water bodies, grassland, and natural forest, while built-up areas, shrublands and plantations were on the rise. Built-up areas, plantations and shrublands are likely to continue growing due to the growing population's demand for settlement and arable land to meet human needs, as well as climate variability favouring invasive shrubland. Water bodies, natural forests, bare surfaces, and grassland are expected to decline further as a result of the overexploitation of water for the irrigation of commercial farming within the Letaba catchment, a rapidly increasing population, and invasive shrub species. The lack of enforced regulations and policies to protect natural resources within the Letaba watershed is the primary driver of the large LULC transition. The conversion of the majority of natural forest to plantation areas will result in deforestation, with consequences for ecosystems, human livelihoods, climate regulation, and biodiversity. Grasslands, on the other hand, are being converted to shrublands, built-up areas, and plantations, with implications for the extinction of uncounted populations and species, as well as for biodiversity and climate. As a result, environmentalists, watershed managers, forest managers, decision makers, and stakeholders must act quickly to address issues of environmental degradation. To avoid land degradation, the study recommends that appropriate measures be taken to protect and restore natural resources such as grasslands, natural forests and water bodies, and to eradicate invasive shrubs in the Letaba watershed.

Disclosure statement

No potential conflict of interest was reported by the authors.

Figure 1. Location of the Letaba watershed, South Africa.

Figure 2. Schematic flow chart showing the methodological steps.

Figure 3. Variable importance percentages derived using the RF variable selection method.

Figure 5. Representation of total area LULC change (gains and losses) in the Letaba watershed for the time periods 1990, 2000, 2010 and 2021.

Table 1. Land use and land cover classification description.

Table 3. Area covered by each class type for the four time periods.

Table 4. The rate of change in LULC classes from 1990 to 2021.
A Typing Discipline for Hardware Interfaces

Modern Systems-on-a-Chip (SoC) are constructed by composition of IP (Intellectual Property) Cores, with the communication between these IP Cores being governed by well-described interaction protocols. However, there is a disconnect between the machine-readable specification of these protocols and the verification of their implementation in known hardware description languages. Although tools can be written to address such separation of concerns, the tooling is often hand-written and used to check hardware designs a posteriori. We have developed a dependent type-system and proof-of-concept modelling language to reason about the physical structure of hardware interfaces using user-provided descriptions. Our type-system provides correct-by-construction guarantees that the interfaces on an IP Core will be well-typed if they adhere to a specified standard.

Introduction

Hardware Description Languages (HDLs) such as Verilog, SystemVerilog and VHDL are designed to realise both the structure and behaviour of hardware systems. Hardware is modelled as interconnected components (modules) that are connected through ports, a port being an individual wire or a collection of wires. Ports carry data, and the flow of data on a port is directional. HDLs abstract over groupings of ports (port groups) as an interface, and present values at higher levels of abstraction such as integers and strings. A component can have multiple interfaces that each send multiple values, and each can be characterised differently. An initiating interface (initiator) initiates communication, and the targeted interface (target) is the recipient of the communication. Modern hardware design is not just about digital circuits; it is also about describing systems of systems. For instance, System-on-a-Chip (SoC) design views hardware modules (IP Cores) as boxes connected using well-known and bespoke interfaces. The structure and behaviour of these interfaces are described in natural language documents [3, 41, 4]. Such standards documents will present an abstract interface description, which is a global view of an interface agnostic to its endpoint usage, and will provide salient structural information (using natural language) about each port in the interface required for realisation in a HDL. Details provided include a port's size, sensitivity, necessity, flow, and dependencies between the details specified. Further, these documents describe behavioural characteristics of the interfaces as a whole, for example, how ports are grouped to describe different channels.
While circuit-level designs are required to implement behaviour at a low level, the designer must also ensure that the components in a SoC design are correctly connected according to the provided specification. Standardised machine-readable formats such as IP-XACT capture much of the structural information found within the standards documents' natural language descriptions [23]. However, interface specifications written using IP-XACT cannot be parameterised, nor can structural dependencies be specified between ports and over interfaces. Further, not all the information contained within a natural language document can be specified using IP-XACT. For instance, IP-XACT does not support the definition of strobes: a strobe is a signal carried on a separate port that is linked to an individual bit in a multi-wire data bus, and the number of strobes is dependent on the size of the bus. Conversely, the machine-readable specification can present some information more clearly than the specification document itself. For example, port necessity for the APB specification is more clearly described in the IP-XACT specification than in the standards document.

Generally speaking, there is a disconnect between the description of an interface's structure in a standards document, its representation in a standardised machine-readable format, and its enforcement in a HDL. When instantiating these interfaces in a HDL there are no mechanisms to ensure that the characterised interfaces respect the specifications. As a result, mismatches between the specifications and their implementations are common.

Contributions

The aim of our work is to improve the security and safety of SoC design by utilising state-of-the-art concepts from programming language theory to provide greater correct-by-construction guarantees over the structural and behavioural aspects of SoC designs.

Dependent type-systems present a rich and expressive setting that allows precise program properties to be stated and verified directly in the language's type-system [28]. Such type-systems also support modelling of resource usage in the style of substructural typing [40, 6]. By building upon existing work from hardware design we can use these concepts to construct a type-based formal description of abstract interfaces, and formally validate that concrete component interfaces adhere to these descriptions at design-time using type checking.

Specifically, we make the following contributions:

1. We present a type-driven modelling framework (Cordial) for reasoning about interfaces on components within a SoC design.
2. We show the use of Cordial for describing an exemplar protocol, Mungo, and discuss how Cordial can be used to model real-world protocol specifications: APB, LocalLink and AXI [4, 3, 41].
3. We describe the formalisation of our framework in the dependently typed programming language Idris [9], which also constitutes a proof-of-concept implementation.
Figure 1 summarises the core constructs that comprise Cordial and their relations. Modelling information is taken from the IP-XACT standard [23] and existing work [30] to construct a model (θ AID) to represent abstract interface descriptions. Our model construction language (λ AID) is a simple extension of the Simply Typed Lambda Calculus (STLC); it models parameterised specifications as computable functions and allows dependencies to be made between signals. The type-system of λ AID follows a substructural design [40, 5], allowing correctness guarantees towards the labelling of signals to be lifted into the type-system. Model construction proceeds by reduction of λ AID instances to a reduced form (λ redux AID), which is then evaluated to construct θ AID instances using continuation passing. Concrete interfaces are modelled using θ COMP to present components in a SoC with multiple interfaces.

Inspired by the notions of global and local types from Session Types [22], abstract interface specifications are treated as a global description that is characterised to a local description (θ proj AID). By embedding the projected model (θ proj AID) into the type of the interface description (θ COMP), the model's type-system ensures that a local type is satisfied by its global type. Further, the concept of thinnings [1, § 3] captures a specification's optional ports, and allows optional ports to be knowingly skipped.

Application of Cordial would see it embedded within existing SoC tooling to enrich existing HDLs with static design-time mechanisms that would make mismatches between interface specification and implementation impossible, and thus reduce errors, increase design productivity and enhance the safety and security of SoC designs. The transformations of specification instances, and the model projections, would be automatic and hidden from users. Protocol designers would have a tool (based on λ AID) to design interface specifications. During the SoC design phase, SoC designers use these specifications to annotate their components (θ COMP) and ensure their port selections are correct.

Outline

Section 2 presents a running example that further motivates our work. Section 3 introduces our model for abstract interface descriptions (θ AID) and the specification language (λ AID) used to construct θ AID model instances. Section 4 details our model (θ COMP) for describing concrete components and how projected θ AID instances are used to type-check interfaces. Section 5 briefly describes the formalisation of Cordial in Idris, and Section 6 considers use of the framework to model real-world interaction protocol specifications. Section 7 discusses the efficacy of the framework and considers related work. The paper concludes with a discussion of future work in Section 8.

Notation. For simplicity, the syntax for standard algebraic types is abstracted over. Similar abstractions are used for dependent types. Single-field variant types are presented with a constructor name as the label and the body being an n-ary tuple. Where possible, simple typing rules are embedded within the presentation of the abstract syntax and types. Model types are denoted using blackboard-style letters. Types from construction languages are denoted using uppercase Greek letters. Constructs subscripted with d are from θ AID, and those subscripted with p are from θ proj AID.

2 The Mungo Protocol

Presentation of Cordial will be aided through consideration of an exemplar protocol (Mungo) that captures salient physical properties common to many interaction protocols.
Figure 2 shows how Mungo can be realised in SystemVerilog. The interface is parameterised; however, SystemVerilog only allows such interfaces to have a single default parameter, while Mungo has multiple default parameters. Two characterised interfaces, for both an initiator and a target, are presented as modports. Error-related signals and clock information are optional. Interfaces can take many other valid structural forms. SystemVerilog supports unrestricted use of dangling ports, in which a receiving port is unconnected. In these cases the value received is taken to be the default value dictated by the port's type. A designer can also deviate from the specification and make required ports optional. When connecting two modules together that support Mungo, the wrong interface might be left out. Further, not all HDLs support the concept of dangling ports. Figure 3 presents a second use of SystemVerilog to declare two components that support Mungo, and connect them together. Within Figure 3 we present a visualisation of the module interconnections. In this example, the initiating component has two dangling ports for optional error reporting, and we have deviated from the specification by using different labels. SystemVerilog provides name- and position-oriented connection of modules. Ultimately, the programmer is responsible for ensuring that the ports are connected correctly, and can wire or name ports freely. We need to ensure that the interfaces on a module are valid against their respective specifications.

3 Abstract Interface Descriptions

This section presents a model (θ AID) for reasoning about abstract interface descriptions, together with a language (λ AID) for model construction. How λ AID instances are transformed into θ AID instances is also described. Taking inspiration from IP-XACT [23], abstract interfaces are modelled as a named tuple of port descriptions and other metadata. This is a common approach, as seen in existing work [30, 19]. For each port a variety of emergent properties are also tracked. Dependent types control invariants over model structure and property values. θ AID model instances are not parameterised; the construction language (λ AID) facilitates the creation of parameterised specifications and ensures that models use unique labels, through substructural typing.

Properties

Ghica et al. [19] modelled ports according to their size and signal direction. However, there are other important properties, as shown by McKechnie [30]. Ports are uniquely identified using labels. Similar types of ports share similar behaviour. A port can: communicate data; provide addressing information; provide clock ticks; trigger a reset; signal an interrupt; indicate control; provide port-level behavioural information; or be used in a general sense. Differentiating between these behaviours is important when connecting two (or multiple) ports together. Not all ports in an interface are required, and how a port responds to changes in signal (sensitivity) should also be captured.

For interfaces, the salient properties concern the style of communication. Does the interface expect to interact with a set number of other interfaces, or interact directly with another interface?

Model Components

Figure 4 Common terms and their types.
Figure 4 presents the shared terms and types used throughout the models and languages presented. Numbers originate from the set of natural numbers greater than zero. Port labels are specification dependent and assumed to be typed enumerations. Signals are either sensitive or insensitive. Sensitive wires are level sensitive (high or low) or edge sensitive (rising or falling). Signals either originate from a system interface, or from another component (IP Core). An interface's communication style is either broadcast or unicast.

Of interest is how a port's type and label are modelled. A "kind" provides type-level disambiguation between different kinds of labels and ports. θ AID and λ AID support several types of port, and different port types have different shapes. Data and address ports will always be an array of ports. Clocking information, resets, interrupts, and control ports will always use a single wire. General and information ports can have either shape. When describing widths, the shape of the port dictates the possible values.

Ports must be labelled; however, they can also share a common name with a fixed set of other ports (cf. strobes in APB and AXI). A label is either named and used once, or indexed and used i times. To prevent ambiguities between different label families, the type for labels is indexed with the type associated with the underlying name used.

A Model for Abstract Interfaces

Figure 5 presents the core modelling constructs, and typing rules, for θ AID model instances. Within θ AID, signal flow is directional. Signals flow from initiator to target, from target to initiator, or bidirectionally; or they are always received or always produced. Ports can be completely optional ( ? ), target optional ( ?t ), initiator optional ( ?i ), or required ( ! ). Wire ports have width (1 d), and array ports have width (w d (n)) where n is greater than one. Ports can also be specified with an arbitrary width (∞ d). The type for port widths is parameterised by a port kind. This enforces the relation that ports will have the correct width for their kind, i.e. a wire can only have length one.

A port description is a named tuple comprising the port's label (l), kind (k p), type (t), flow (f), necessity (o), width (w), sensitivity (s), and origin (h). The type for ports is a type synonym for a dependent function. Dependently typed terms allow an invariant to hold during term construction: the port kind associated with a port's type and width must respect the specified port kind. Thus, if the port has kind WIRE then its width and type must be suitable for a wire. Further, the port type itself (P d (L)) is indexed by the type associated with the label.

Ports are grouped in a cons-style collection (ps : PG d (L)) whose type is also parameterised by the type associated with labels. All ports in a group must have the same type of label. An abstract interface is a named tuple containing the interface's communication style, the maximum number of initiators and targets, and a collection of ports.
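To make the kind/width invariant concrete, here is a small illustrative Python sketch (not the paper's Idris formalisation): constructing a port checks that its width respects its kind, in the same spirit as the dependent-type invariant. All names are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    WIRE = auto()    # single-wire ports: clocks, resets, interrupts, control
    ARRAY = auto()   # multi-wire ports: data and address buses

@dataclass(frozen=True)
class Port:
    label: str
    kind: Kind
    width: int       # 1 for wires, > 1 for arrays
    flow: str        # e.g. 'initiator-to-target'
    required: bool

    def __post_init__(self):
        # Run-time analogue of the invariant the type-system enforces
        # statically: a port's width must respect its kind.
        if self.kind is Kind.WIRE and self.width != 1:
            raise ValueError(f'{self.label}: a wire must have width 1')
        if self.kind is Kind.ARRAY and self.width < 2:
            raise ValueError(f'{self.label}: an array must have width > 1')

clk = Port('clk', Kind.WIRE, 1, 'always-received', required=False)
data = Port('data', Kind.ARRAY, 32, 'bidirectional', required=True)
```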
Example

Figure 6 presents a θ AID instance for Mungo (Table 1). An enumerated type provides the labelling information. θ AID instances are, however, not parameterised. Mungo is an interface that can be instantiated with several address and data bus widths. The example instance for Mungo in Figure 6 provides holes ( ) in place of precise widths; exact values for widths must be presented. Further, there are no restrictions on label use: one can easily duplicate the use of a name. The next section presents a language to define parameterised specifications and ensure label uniqueness.

Specifying Interface Descriptions

Figure 5 presents a model instance that is dependently typed; however, the model design itself has several limitations. First, labels are not required to be unique. Second, model instances cannot be parameterised. We address these issues through the creation of a description language, λ AID. An extension of the STLC, λ AID describes the construction of model instances. Specifications are a sequencing of port descriptions and other metadata. Functions, and application, in λ AID provide parameterisation of specifications and descriptions of structural dependencies. Evaluation of λ AID, using continuation passing, constructs instances of a θ AID model. A substructural type-system provides further correct-by-construction guarantees that labels are unique. Construction semantics detail model instance construction from λ AID programs.

Counting Label Usage

Substructural type-systems extend existing type-systems with extra information [40]. Labels in θ AID instances are required to be unique. The type-system for λ AID is designed to ensure that label usage is linear: a label can only be used once.

Inspired by the work of McBride [29], we utilise a "rig" to capture label usage. For our bespoke use case a rig of the same style is not required: McBride's rig is for computation (addition and multiplication), whereas our rig is for usage accounting only. "Let", "Seq", "Unit" and "Pure" form a monadic computation context in which the labels and their usage are the computation in context. Although sequencing is presented separately from "Let"-bindings, sequencing can also be described as a "Let"-binding where ω is bound to 1.

The term stop denotes the end of a specification, such that all labels are used. Terms are presented to represent functions and function application. Predicated versions of functions and application exist to restrict parameters of type N* to predefined sets of whole numbers. Whole labels are created using a single term. Port declarations are similar to port construction in Figure 5a, except that rather than a direct label, port descriptions must take a label variable. There are terms for setting the communication style, and the maximum number of initiators and targets. Within λ AID, labels are not indexed; ports with an indexable label are indicated using replicate.
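As an informal illustration of replicate and the arithmetic sub-language described in the next subsection, the following Python sketch derives a strobe count from a data-bus width. The labels and the byte-lane convention are assumptions for illustration, not the paper's definitions.

```python
# A minimal sketch of data-dependent (replicated) port specifications:
# the arithmetic sub-language lets a strobe count be derived from a
# user-supplied data-bus width.
def replicate(label, count):
    """Expand one indexed label into 'count' concrete port labels."""
    if count < 1:
        raise ValueError('replication count must be a whole number >= 1')
    return [f'{label}{i}' for i in range(count)]

def strobe_ports(data_width):
    # One strobe per 8-bit lane of the data bus, in the style of the
    # APB/AXI strobes mentioned earlier; the width must therefore be a
    # multiple of eight.
    if data_width % 8 != 0:
        raise ValueError('data width must be a multiple of eight')
    return replicate('PStrb', data_width // 8)

print(strobe_ports(32))  # ['PStrb0', 'PStrb1', 'PStrb2', 'PStrb3']
```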
A simple arithmetic language, with binary operators that operate on whole numbers, is embedded within λ AID. The supported operations are addition, subtraction, multiplication, and division. With this, user-provided widths can be used to construct arithmetic dependencies on the number of ports in a specification. This is described using replicate, allowing data-dependent port specifications (i.e. strobes) to be supported. Figure 8 presents the types for λ AID. Here L is a placeholder representing a user-defined set of labels. Several types are taken from the existing constructs (Figures 4 and 5a), without the types for labels, ports, port groups and interfaces. Three new types are introduced. First is the unit type (1), representing terms that do not represent computations. Second is the type for label variables (Λ (L, u)), indexed by the type of the underlying label value and parameterised with usage information from the "Rig o' 2". Function types follow the standard definition, and predicated functions are restricted to acting on whole numbers.

Type-System

Within λ AID, well-typed contexts (Γ) comprise name-type pairings. Contexts can be extended using (+), and named terms updated using (±). Section 3.4.3 presents the typing rules for λ AID. For brevity, the typing rules for maths expressions are not provided. Like the syntax definition of λ AID itself, the typing rules follow those of the STLC, but with extensions for describing abstract interfaces. Rules Var, Lam, App, Let, Pure, and Seq follow standard conventions, with one noticeable difference that follows from the work of Atkey [5]: the usage information associated with labels presents stateful information. The monad formed by "Let", "Seq", "Unit", and "Pure" is a Hoare Monad that allows state information for label usage to be threaded through the entire computation [8]. The notation Γ old e Γ new represents updating the context from Γ old to Γ new. The context will change only if the rules are well-typed; where the notation is not used, the context does not change. Predicated functions, and their application, mirror their plain counterparts but have a side-condition that requires the predicate to hold true for type-checking to be successful.

The typing rules for labels and ports use the "Rig o' 2" to instantiate and augment the usage count for the label-variable types Λ (L, u). These are the type-level computations that enforce correct label usage. Rule Lbl presents the typing rule for label creation, initialising the type with usage free. Rule Stop describes the end conditions for λ AID programs, and results in erasure of the context. λ AID programs will successfully type-check only if all label variables have been used. Rule Port specifies a new port, and consumes labels. A label ω with type Λ (L, u) and usage u can only be used if the usage is free. If the label is available to use, the type for ω in the resulting computations will be Λ (L, used), and it will thus be unavailable.
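The linear-usage discipline can be sketched in plain Python as follows. This is an informal, run-time analogue of Rules Lbl, Port and Stop, not the paper's type-level mechanism, and all names are illustrative.

```python
# Every declared label must be consumed exactly once by a port
# declaration before the specification ends.
FREE, USED = 'free', 'used'  # the two-element usage rig

class SpecError(Exception):
    pass

class SpecContext:
    def __init__(self):
        self.usage = {}

    def label(self, name):
        # Rule Lbl: label creation starts with usage 'free'.
        if name in self.usage:
            raise SpecError(f'label {name!r} already declared')
        self.usage[name] = FREE
        return name

    def port(self, label):
        # Rule Port: a label may only be consumed while it is still free.
        if self.usage.get(label) != FREE:
            raise SpecError(f'label {label!r} is unavailable')
        self.usage[label] = USED

    def stop(self):
        # Rule Stop: well-formed only if no label remains free.
        unused = [l for l, u in self.usage.items() if u == FREE]
        if unused:
            raise SpecError(f'unused labels: {unused}')

ctx = SpecContext()
clk = ctx.label('clk')
ctx.port(clk)
ctx.stop()  # would raise if 'clk' had not been used
```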
Replication of a port description (Rule Rep) details that a port will be replicated if the number of replications is greater than two. The remaining rules detail the simple typing rules for the remaining terms. Figure 10 demonstrates how Mungo is specified using λ AID. The concrete set of labels from Figure 6 is reused. The specification for Mungo is parameterised through function specification, which is reflected in the specification's type signature. The use of the specification to create θ AID instances is restricted to values of x ∈ {32, 16} and y ∈ {8, 4}; a type error will occur if values for x and y are chosen that are not in the provided sets. The type signature also shows the initial and end contexts for the function: start with nothing, end with nothing. Different specification instances are generated through application of the function to different values.

Building Models from Specifications

In this section the set of transformations used to construct θ AID instances from λ AID programs is described. We first reduce λ AID programs to core terms, and then use continuations to represent model construction as evaluation of a reduced λ AID program.

Figure 11 presents the rules for reducing λ AID programs to core terms, following standard conventions. Reduction of N* values mirrors reduction of natural numbers, but with the smallest value being one rather than zero. The reduced version of a λ AID program is called λ redux AID, and the reduction of a λ AID program l to its reduced form (l') is written l ⇓ redux l'. Much like the STLC, reduction of λ AID programs is strongly normalising. To construct θ AID model instances, the reduced form, λ redux AID, is first transformed into a continuation (λ cont AID) in the style of Hatcliff and Danvy [21]. This transformation is denoted l cont, where l is an instance of λ redux AID. Using this approach we can make model construction more easily checkable for termination, and model construction comes from evaluation of λ cont AID instances. For brevity we do not provide the definitions of λ cont AID and l cont, and remark that they follow standard constructions [21].

Evaluation of λ cont AID instances, which we denote using ⇓ cont, transforms: label variables into labels; port declarations into port descriptions; and repeated port declarations into port groups. Each evaluation of the accessor functions for setting description values replaces the previously seen value, and a default set of values is supplied initially. When evaluated, the continuation returns a tuple containing the final interface metadata and the collated port groups.

The complete steps to construct a θ AID from λ AID are defined as follows.

Definition 2 (Construction of θ AID from λ AID). Let m be a θ AID model instance, and l be a λ AID program. The construction of m is defined as the reduction of l to an instance l' of λ redux AID. This instance l' is then evaluated to an instance of λ cont AID, which is then reduced using ⇓ cont to produce m.
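The pipeline of Definition 2 can be approximated at the value level with a deliberately simplified Python sketch: a parameterised specification is applied to arguments, flattened to core declarations, and folded into a model record. Everything here, from the declaration encoding to the field names, is illustrative rather than the paper's construction.

```python
# Applying the specification function plays the role of reduction to
# core terms; the fold over declarations stands in for the
# continuation-passing evaluation.
def mungo_spec(data_width, error_width):
    return [
        ('style', 'unicast'),
        ('port', 'clk', 1),
        ('port', 'data', data_width),
        ('port', 'error', error_width),
        ('stop',),
    ]

def evaluate(declarations):
    model = {'style': None, 'ports': []}  # default metadata, overwritten later
    for decl in declarations:
        if decl[0] == 'style':
            model['style'] = decl[1]      # later settings replace earlier ones
        elif decl[0] == 'port':
            model['ports'].append({'label': decl[1], 'width': decl[2]})
    return model

model = evaluate(mungo_spec(32, 8))
print(model['style'], [p['label'] for p in model['ports']])
```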
4 Specifying IP Core Interfaces

Section 3 described the specification and construction of θ AID instances; these are abstract interface descriptions. This section looks at the specification of components in a SoC design, and at how guarantees are made that the physical interfaces satisfy given θ AID instances.

A model for reasoning about components (θ COMP) is introduced. Each component comprises a set of physical interface models whose interfaces are satisfied by θ AID instances. Interface satisfaction is a two-step process: first, a θ AID instance is projected using the specified endpoint e, creating the projected model θ proj AID; second, the projected model is used in the type of an interface to provide a type-level invariant that the presented interface satisfies the projected model. Dependently typed model terms ensure that if an interface is well-typed then the interface satisfies the provided specification.

Projecting Abstract Interfaces

Figure 12 presents the terms, and salient types, for a projected interface model instance. A projected interface represents the local view of an abstract interface. The structure of a projected interface mirrors that of an abstract interface, and values only differ with respect to a port's signal flow (direction) and necessity. The types for ports, port groups, and interfaces are parameterised by the type of labels associated with ports, the endpoint that the term was projected to, and the originating abstract interface. The types are indexed with the originating term to allow structural invariants, for example a port's shape, to be specified (cf. Section 3.3). A projected port is either unidirectional, receiving (+) or sending (−) signals, or bidirectional (±). A port is required ( ! ) or optional ( ? ). The types for directions and necessity are both dependent, each containing the endpoint being projected and the original projected value. These values are not free to choose: Figure 13 presents the typing rules that constrain the values in the projected model to predetermined pairings. Dependent types ensure the correctness of the projection transformation. The projections for port flow and necessity are defined as functions (( d ) and ( n )) that, given a flow or necessity description, compute the required direction or necessity. These projection functions will only type-check if the given inputs match the allowed pairing of values for the returned type. Figure 14 presents the remaining typing rules for projected interfaces, ports, and port groups. Like the structural definitions, the typing rules mirror those of their abstract counterparts, and are indexed by the type associated with labels. However, the types are further indexed by their abstract counterparts, and also by the endpoint the term is being projected under. The θ AID instance serves as a type-level meta-model from which information is sourced. This approach allows several invariants on the structure of the projected interface to be established. Figure 15 presents the projection semantics for projecting θ AID instances to θ proj AID instances. The type signatures for each projection function are omitted for brevity, but they follow those of ( d ) and ( n ). By design, projections and their invariants are well-typed. Malformed projections will fail to type-check, for example if the widths or calculated directions are wrong.

Example

Figure 16 presents example projections of the θ AID instance for Mungo, applied to the parameters 32 and 8 using predicated function application. Figure 16a shows a projection for an initiator interface, and Figure 16b a projection for a target interface. In both projections,
the directions of each port are mirror images of each other, aside from the constant direction of the system clock and the bidirectional data port. Further, each port's necessity has been calculated to respect whether the port is optional or not. The ports for returning error information are optional if the interface's endpoint is a target, and required if the endpoint is an initiator.

Specifying Physical Interfaces

Abstract interfaces, and their projections, represent descriptions of a component's interface; we must also model the component itself. Figure 17 presents the terms and types for θ COMP model instances, and Figure 18 presents the typing rules. The structure of concrete interfaces mirrors that of the abstract interface and its projection. However, a concrete interface does not have optional ports; within our model, dangling ports are not allowed. To model skippable ports, the concept of thinnings is used [1]. A thinning allows structures to be weakened using some decision procedure [13, 2]. We use this concept to weaken the specified ports in an interface's port group with respect to a given specification. Our decision procedure is simple: a port can be skipped if the projected port is optional. The thinning decision procedure does not occur at the value level. Thus a concrete port group is either: empty (∅); extended by a port (::); skipped by an optional port ( ); or an optional port is skipped when extending the group with a port (::≈). The operator (::≈) can be defined as the combination of the (::) and ( ) operators; that is, p :: ( ps) ≡ p ::≈ ps. The typing rules (Figure 18) show how thinning works for interface specifications. The θ proj AID is a type-level invariant; the specification of the necessity of the projected ports is what allows the thinning to occur, or not.

A component is modelled as a collection of interfaces. The type for a component is indexed by a collection (xs) of θ AID-endpoint pairings. The type for a collection of interfaces is indexed by the separated elements of each pair in xs. As a collection of interfaces is constructed, the θ AID instances are projected (at the type level) by the endpoint type to construct the θ proj AID instance indexing the collected interface. Use of projection at the type level ensures that the type for a concrete interface is sourced from the specified projection.

In θ AID model instances, wires have width one, and arrays have a fixed width greater than one or are unrestricted. When projecting an abstract port, the width does not need to be projected into a local value. However, the port width in a θ COMP instance needs to respect the width in the θ AID instance. Widths are modelled using a dependent data type that captures, and reasons with, the width of an abstract port. An instantiated port whose abstract port has an unrestricted width has no restrictions on kind or width. A port whose abstract port has width one must also have a width of one. Similarly, a port whose abstract port has a fixed width must also have the same fixed width.
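A value-level approximation of the thinning check, as a Python sketch: required ports must be matched, optional ports may be skipped. The spec encoding is an assumption for illustration; in Cordial this check happens in the type-system, not at run time.

```python
# Projected Mungo target interface, encoded as (label, required?) pairs.
spec = [
    ('clk', False),
    ('ready', True),
    ('data', True),
    ('error', False),
]

def check_interface(spec, concrete_labels):
    """Walk the projected spec against the concrete port labels; required
    ports must appear in order, optional ports may be thinned away."""
    remaining = list(concrete_labels)
    for label, required in spec:
        if remaining and remaining[0] == label:
            remaining.pop(0)          # port provided: consume it (::)
        elif not required:
            continue                  # optional port knowingly skipped
        else:
            raise TypeError(f'required port {label!r} is missing')
    if remaining:
        raise TypeError(f'ports not in specification: {remaining}')

check_interface(spec, ['clk', 'ready', 'data', 'error'])  # full interface
check_interface(spec, ['ready', 'data'])                  # clk, error thinned
```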
Example

Figure 19 presents two example interfaces that model Mungo. Figure 19b shows the interface with a port to receive the clock, and Figure 19c without a clock port. Within Figure 19c the skip term ( ) allows for the clock to be skipped. If other required ports were to be skipped, this would result in a type error. For both interfaces we have chosen the user-defined error message width to be 32 and 16, respectively. This will not cause a type error as Mungo allows the signal to have a user-defined width. Further, should other value-level information (e.g. port width and label) be incorrectly specified, the example will also fail to type-check.

Type-Checking Interfaces

The types of interfaces in θ_COMP are parameterised by projected interfaces from θ^proj_AID. A satisfaction relation is defined to link programs written in λ_AID to interfaces from θ_COMP.

Definition 3 (Interface Satisfaction). Given a λ_AID specification (ν : 1) and an interface ϑ : I(L, e, ϑ_p), the interface ϑ satisfies ν.

Implementation

We have realised Cordial using Idris, a general purpose dependently typed programming language [9]. This provides both a practical proof-of-concept implementation and a mechanised formalisation that the framework's type-system holds. The models representing interfaces, port groups, and ports translate directly to standard dependently (and non-dependently) typed structures.

Case Studies

Using our implementation we have constructed models for several interaction protocols of varying sizes. We describe the structural properties of the protocols and report on their modelling in Cordial as λ_AID programs.

ARM's Advanced Peripheral Bus

The first protocol considered was ARM's Advanced Peripheral Bus (APB) [3]. The protocol is a legacy protocol comprising at least eleven signals, and can be connected to many target IP Cores using an intermediary IP Core called an interconnect. This requires that we construct two specifications: one for target connections to the interconnect, and the other for initiating connections. At least seven signals are required, and two were target optional. At least seven signals have a known width. The data and address width can be up to 32 bits. There are three interesting features of APB worth noting. First, the specification is parameterised depending on the number of IP Cores that an initiator is connecting to. When connecting to x targets, the signal PSelx will be replicated x times in an initiating interface, but will only be seen once in a target interface. Second, the protocol has optional signals for the clock and a signal to enable the clock. The dependency between the two clock signals is not clear from the specification: can one be skipped when the other is required? Third, the specification requires a set of strobes, one connected to every 8th bit on the data bus. This restricts data widths to multiples of eight.

We chose to implement the specifications using two predicated functions, and one normal function. The two predicated functions were used to ensure that all bus widths are multiples of eight and at most 32, i.e. 8, 16, and 32. The normal function was used to represent the number of targets. Specification of labels and ports followed the known information taken from the specification. The term replicate was used to ensure that the correct number of PSelx signals was generated. For strobes, the maths operations were used to calculate the number of strobes required, based on the size of the data bus. Figure 20 shows part of the APB specification detailing the predicated functions, use of replicate, and strobe specification. A sketch of these calculations follows.
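The APB calculations described above reduce to simple arithmetic, sketched here in Python with illustrative names; in Cordial they are carried out inside the predicated-function machinery.

```python
ALLOWED_WIDTHS = (8, 16, 32)   # predicated function: widths known a priori

def num_strobes(data_width: int) -> int:
    """One strobe per 8 bits of the data bus (so widths must be multiples of 8)."""
    if data_width not in ALLOWED_WIDTHS:
        raise ValueError(f"APB data width must be one of {ALLOWED_WIDTHS}")
    return data_width // 8

def psel_signals(num_targets: int) -> list:
    """replicate: an initiator carries one PSelx line per connected target."""
    return [f"PSEL{i}" for i in range(num_targets)]

assert num_strobes(32) == 4
assert psel_signals(3) == ["PSEL0", "PSEL1", "PSEL2"]
```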
Xilinx's LocalLink

The second protocol chosen was a legacy protocol from Xilinx called LocalLink [41]. The LocalLink protocol requires seven signals and thirteen optional signals. As with the APB protocol, many of the signals were encoded without complications. However, and unlike APB, LocalLink does not specify a set of bus widths; it requires only that the presented bus width is a multiple of eight. Predicated functions require that the list of possible values is known a priori. Thus, we modelled the specification as a predicated function on bus widths of 8, 16, and 32, together with a normal function that takes the number of channels.

The LocalLink specification contains various size dependencies based on the data bus. The size of the remainder bus is dependent on how it is implemented within the IP Core. Specifically, if the remainder is encoded then the bus size will be log2(d/8) − 1; if the remainder is masked then the size is d/8 − 1. Our framework is not expressive enough to encode these differences mathematically: we support simple arithmetic operations, and these operations are not simple. Moreover, our framework does not support related specifications that overlap to be collapsed into a single definition. The LocalLink specification is too value dependent.

Another interesting aspect of the LocalLink specification is that of channels. These are a set of optional signals whose flow is dependent on the application context. The sizes of channel-related signals are also calculated using a floating point operation. These three signals are directional, and whether they send data from target to initiator is dependent on the application and on how the other two signals are specified. Although we can represent these channels as being bidirectional, our language is not expressive enough to capture these application-specific, inter-signal dependencies.

ARM's Advanced eXtensible Interface

ARM's Advanced eXtensible Interface (AXI) is a widely used family of interaction protocols [4] for transferring addressable data between IP Cores. There are several previous versions of AXI, each building upon previous versions. Each version differs by number of signals and by changes to specific signal properties. The AXI specification defines three protocols that offer three different interaction styles: Full, Lite, and Stream. The protocol can be directly connected to another interface, or it can be used to connect to multiple other IP Cores using an interconnect. We report on describing version four of the AXI protocol for direct interfaces only. For versions of the protocol that connect to an interconnect, the techniques presented in Section 6.1 can be leveraged. Version four of the protocol requires 47 signals comprising: two global signals for the clock and a reset; thirteen signals each for specifying writing and reading of an address; seven signals for reading and writing data; and five signals for writing responses from the target to the initiator. Of the 47 signals, 36 are required and eleven are either optional, target optional, or initiator optional. Several signals have user-defined widths. The AXI protocol is parameterised such that the address and data busses can be 8, 16, 32, 64, 128, 256, 512, or 1024 bits wide. Further, the protocol specification supports custom sets of signals that are completely user definable; in fact, the AXI standard explicitly warns against their use due to potential interoperability issues if different modules present user-defined signals that behave differently. The specification was modelled in λ_AID using two predicated functions to control the width of the address and data busses. Each of the signals was translated into the required port descriptions. Like the APB protocol, AXI has the concept of strobes; the same technique as used in Section 6.1 was used.

We chose not to include user-defined signals in this study. Cordial requires that signal details are known a priori. Ideally, designers would create parameterised specifications (functions) that take the user-defined signals as parameters. However, functions in λ_AID take pure values as parameters; port declarations modify the typing environment to update label usage information and are thus not pure. This is a restriction from the substructural type-system for λ_AID (see Section 7.3).
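Returning to the LocalLink size dependencies above, the following minimal sketch makes the two remainder-bus widths concrete; this overlapping, value-dependent arithmetic is exactly what the λ_AID operations cannot express. The function name is illustrative.

```python
import math

def remainder_width(data_width_bits: int, encoded: bool) -> int:
    if data_width_bits % 8 != 0:
        raise ValueError("LocalLink bus widths must be multiples of eight")
    bytes_per_beat = data_width_bits // 8
    if encoded:
        return int(math.log2(bytes_per_beat)) - 1   # encoded remainder
    return bytes_per_beat - 1                        # masked remainder

assert remainder_width(32, encoded=False) == 3      # d/8 - 1 with d = 32
assert remainder_width(32, encoded=True) == 1       # log2(d/8) - 1 with d = 32
```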
The protocol specification divides the signals among several channels. Such a grouping is only required for the specification; at the module level these groupings do not exist. Although Cordial does not support this grouping, incorporating it into the framework would be a potential benefit for reasoning about subgroupings of an interface's port group.

Discussion & Related Work

This section discusses the efficacy of the framework, and related work.

Discussion

Section 6 described three case studies that modelled three interaction protocols using λ_AID. For each of the protocols we were able to encode most of the ports correctly: Cordial is suitable for capturing each port's values. However, both LocalLink and APB illustrated the limitations of λ_AID in capturing all of a specification's dependencies. While Cordial can capture limited value dependencies, for example strobes and the number of targets within APB, the framework prohibits the construction of concise descriptions based on other value dependencies: specifically, ports whose direction is dependent on a mode of operation (LocalLink), and how to version protocols (AXI versions 1-4). Although we can write multiple different versions, a dependently typed construction should be able to capture these properties concisely, and without resorting to copying and pasting protocol specifications. We need to explore what other dependencies there are within a protocol specification, how prevalent these dependencies are, and how we can capture and reason about the relevant dependent properties.

The construction of λ_AID exposes the resource tracking of the type system directly, as resources are associated with variables. This does lead to a more verbose language: for n signals there will be n variables and n additional ports declared. This is not optimal. An alternative approach would be to embed the resource tracking directly in our monad's type to remove the need for variables (cf. Swierstra's Hoare monad and Atkey's parameterised monads [37,5]). However, our current formulation is more extensible, allowing arbitrary new states to be added by indexing the type of new variables.

The type-level resource tracking in λ_AID also prohibits the creation of higher-order descriptions. The typing rule Lam requires a pure value, and sequencing using Let and Seq requires that the knowledge contained in the environment is passed from the previous construct to the next. This is a limitation of the Hoare Monad used for sequencing expressions.

Modelling Hardware Interfaces

Many attempts at reasoning about hardware have centred on formalising hardware systems as a collection of digital circuits and capturing the behaviour of signals through the specified circuits. Ghica et al. exploited category theory to investigate connection of components [18,17,19]. EDSLs have been developed for Haskell, such as Lava [20] and Cλash [35], that take other mathematical approaches to reasoning about hardware behaviour. ΠWare utilises dependent types to reason about hardware [15,16]. Vijayaraghavan et al. [38] present a complete formalisation of the behaviour of SoC designs; however, their approach does not look at the validation of interfaces against a specification, and concentrates on modelling the behaviour of components as a distributed system. Our framework complements existing work by providing guarantees about the physical structure of a component's ports.
Tooling such as Vivado IP Integrator [39] and Kactus2 [24] can automatically construct, and connect, components in a SoC architecture correctly. Such tooling is based on IP-XACT and vendor extensions. Examination of the Vivado toolchain reveals handwritten TCL scripts bespoke to the AXI family of protocols. Our work presents a specification-agnostic framework for type-checking hardware interfaces against a richer specification than seen with IP-XACT solutions. We position our work as a possible foundation for machine-derivable code to develop richer integration and construction checks than those seen with IP Integrator and Kactus2.

Click is an untyped SoC design language for describing the routing of data [25]. McKechnie [30] developed a type-system for typing the interconnections found within Click specifications. Our work provides a natural extension to McKechnie's work and provides a means to type components in a Click design against external specifications.

Substructural Typing

The substructural type-system for λ_AID is based upon Hoare logic [5,8]. Unfortunately, Hoare logics do not support the frame rule, a means to divide and share invariants in a composable manner. This results in λ_AID not being able to support higher-order descriptions. Separation Logic is an advancement that does support the frame rule [34], and has been used to construct substructural type-systems for EDSLs [26]. However, it is not clear how straightforward it would be to realise such a type-system for an EDSL within a dependently typed language.

There are other formal models upon which one can realise substructural type-systems for EDSLs, namely TypeStates [31] and Refinement Types from Hoare Types [8,32]. All allow for reasoning about type-level resource usage protocols; however, how straightforwardly these models can be realised within a dependently typed language is not clear. λ_AID was realised as an EDSL; perhaps realising it as a standalone Domain Specific Language (DSL) written in Idris might allow for Idris' rich type-system to better realise the substructural typing for λ_AID. Future work will be to investigate how to realise, and implement, the substructural typing for λ_AID.

Implementing Cordial

Cordial has been implemented within Idris. Any other dependently typed language that supports full-spectrum dependent types, such as Agda [33], would also be a suitable host language. Although Cordial uses dependent type theory and substructural typing, non-dependently typed languages can also realise the framework. The ideas are transferable, but the implementation would not be as clean nor as concise. Racket is a general purpose language that supports EDSL creation through fine-grained control over the language's type-system [14]. F* is a general purpose language with value-dependent types [36]. Whereas Idris provides full-spectrum dependent types, F* provides value-dependencies using refinement types. This provides a novel, alternate environment in which to construct "value-dependently-typed" programs. How the approach behind Cordial is transferable to these languages is worth investigating.
Conclusion

We presented a framework (Cordial) to provide correct-by-construction guarantees over interface specifications in SoC designs. We have demonstrated use of the framework to model real-world protocols, and noted limitations in the model's expressiveness together with future work to enrich said expressiveness. There are other areas for future work:

Checking Existing Systems

Our approach lends itself well to the generation of designs from model instances. We can easily extend our Idris implementation to generate stubs for various HDLs. However, how do we evaluate existing interfaces? To do so, not only do we need to be able to extract interface model descriptions from existing HDL code, but also to associate these model descriptions with abstract interface model descriptions. That is, we need to be able to infer from a component specification the concrete interfaces, and for those interfaces their abstract descriptions and which characterisation corresponds to the found interface. The problem of model inference is difficult as component interfaces are not always cleanly defined. Multiple interfaces for a component can be presented as a flat port group, sending ports can send to multiple recipients, and the ordering of ports does not necessarily reflect the ordering in the specification. Further, the names given to ports may not match the labels described in the specification. Developers, and code generation tools, have complete freedom in structuring their components' interfaces. Further work will be to explore how to infer models from such "messy" SoC descriptions.

Enriching existing HDLs

We have formalised Cordial in an existing general purpose language. A more interesting area for future work, and one that would increase adoption of the ideas mentioned, would be to develop extensions for various HDLs, such as SystemVerilog, with the presented framework. A similar approach would be to extend existing design environments, such as Vivado from Xilinx, to incorporate our tooling and ideas.

Checking Behaviour

Our solution reasons about the structural correctness of SoC architectures. These guarantees are a design-time check. Standards documents also describe a protocol's behavioural correctness. Our models do not capture a component's behaviour: a correctly connected component may show behaviour at run time that is incorrect with respect to the specification. We saw this when modelling the AXI family of protocols. Cordial borrows notions of global and local projections from Session Types. We could also look to use Session Types to reason about hardware behaviour. While there have been attempts at extending Session Types to fit communication models similar to those found in hardware [27], none have been directly applied to checking hardware. Future work will be to explore how we can extend our model descriptions to capture the behaviour of a component's interface.

Modelling complete SoC architectures

SoC designs are about connecting components. A natural extension to our work would be to provide an orchestration language that uses θ_COMP to model components and their connections. Existing work has investigated verifying IP Core connections using static typing to ensure that substructural properties of a SoC design hold (Section 7.2). Integrating the work of McKechnie [30] into the expressive type-system of our framework can serve as the basis for a more complete solution to SoC design. We leave this aspect of future work as an open problem.

Figure 1 Relationships between various languages, models, and intermediate representations.
Figure 3 SystemVerilog module definitions adhering to Mungo, and signal flow indicators.
Figure 5 Definition of θ_AID. (a) Terms and types.
Figure 6 Mungo as a partial θ_AID instance.

Let {used, free} be a set R, with an operation use(u) to change a u ∈ R as follows: use(u) : free → used; used → used.

3.4.2 Terms

e ::= i | n | r | f | k_p | s | h | c (Constants)
    | (add e e) | (sub e e) | (mul e e) | (div e e) (Maths)
    | 1 | ω (Unit, Values & Variables)
    | e; e | ⌜e⌝ | let ω be e in e (Statements)

Figure 7 Terms for λ_AID.

Figure 7 presents the terms for λ_AID. Common structures from Figure 4 are included, except for the terms for labels. Terms can be sequenced, and bound to variables using let-bindings. Pure values are indicated with ⌜e⌝. Combined, the terms "Let", "Seq", and "Unit" allow computations to be sequenced.

Figure 8 Types & typing context for λ_AID.
Figure 11 Reduction rules for λ_AID.
Figure 13 Typing rules for flow and necessity projection.
Figure 14 Typing rules for θ^proj_AID.
Figure 15 Projection semantics for θ^proj_AID.
Figure 16 Mungo projected as θ^proj_AID instances.
Figure 17 Terms for concrete interfaces.

Proof. By construction. The evaluation of ν (⟦ν⟧) produces a model (ϑ_d′ : I_d(L′)), for some (L′ : Type_L). The type of ϑ is I(L, e, ϑ_p). The projected interface ϑ_p has type I_p(L, e, ϑ_d). If ϑ_d ≡ ϑ_d′ and L ≡ L′ then ϑ_d ↓i e ≡ ϑ_d′ ↓i e. If ϑ_d ≢ ϑ_d′ or L ≢ L′ then ϑ would fail to type check.

Figure 19 Sample θ_COMP instances that show port skipping.

Table 1 Signal descriptions for Mungo.

Table 1 presents the signal descriptions (abstract interface description) for Mungo. Behaviourally, the protocol represents the reading and writing of data from the initiating IP Core to the target. Mungo provides unicast-style communication; it does not support broadcast communication through a shared bus. A system clock (SYS_CLK) can send signals to both the target and initiator. The clock is optional, as the clock source for the specified component might not go through this interface. Reading and writing are dictated by the initiator using the control wires CTRL_R and CTRL_W. The data bus is bidirectional, and data can have a width of 32 or 16 bits. The address bus is eight or four bits in width. Error reporting is optional, where ERR_MODE indicates the type of error and ERR_INFO is the message itself. The width of error messages is left to the implementer. All wires have high sensitivity.

Figure 12 Terms and salient types for θ^proj_AID:

Portgroup: ps_p ::= ∅_p | p_p ::_p ps_p
Interface: i_p ::= iface_p(c_style, n, n, ps_p), with i_p : I_p(L, t, i_d)
Expressions: e_p ::= e_d | t | d_p | o_p | p_p | ps_p | i_p
Types: T_p ::= T ∈ T_d | E | D(t, f) | O(t, o_d) | P_p(L, t, p_d) | PG_p(L, t, ps_d) | I_p(L, t, i_d)
Port: p_p ::= port_p(l, k_p, t, d_p, o_p, w_d, s, h), with p_p : P_p(L, t, p_d)
The typing rules for projected interfaces, ports, and port groups (Figure 14) can be summarised as follows:

[IP] given c : C_style, maxI : N*, maxT : N*, and ps_p : PG_p(L, e, ps_d), then iface_p(c, maxI, maxT, ps_p) : I_p(L, e, iface_d(c, maxI, maxT, ps_d));
[PP] given l : L(L, k_l), k_p : K_P, ty : A(k_p), d : D(e, f), o : O(e, o_d), w_d : W_d(k_p), s : S, and h : H, then port_p(l, k_p, ty, d, o, w_d, s, h) : P_p(L, e, port_d(l, k_p, ty, f, o_d, w_d, s, h));
[PG] ∅_p : PG_p(L, e, ∅_d), and, given p : P_p(L, e, p_d) and ps_p : PG_p(L, e, ps_d), then p ::_p ps_p : PG_p(L, e, p_d ::_d ps_d).

These rules establish four invariants:

1. Non-projected values must match.
2. Values parameterising the type of projected values are sourced from the abstract description and must match.
3. The endpoint that terms are being projected under must match.
4. The structure of the projected interface must match the structure of the abstract interface.

The projection semantics (Figure 15) take θ_AID instances to θ^proj_AID instances:

port_d(l, k_p, t, d_p, o_p, w_d, s, h) ↓p e = port_p(l, k_p, t, (d_p ↓d e), (o_p ↓n e), w_d, s, h)
∅_d ↓g e = ∅_p
(p_d ::_d ps_d) ↓g e = (p_d ↓p e) ::_p (ps_d ↓g e)
iface_d(c_style, a, b, ps_d) ↓i e = iface_p(c_style, a, b, (ps_d ↓g e))
Some Identities Involving the Fubini Polynomials and Euler Polynomials

In this paper, we first introduce new second-order non-linear recursive polynomials U_{h,i}(x), and then use these recursive polynomials, the properties of power series and combinatorial methods to prove some identities involving the Fubini polynomials, Euler polynomials and Euler numbers.

Introduction

For any real numbers x and y, the two-variable Fubini polynomials F_n(x, y) are defined by means of the generating function (see [1,2])

e^{xt} / (1 − y(e^t − 1)) = Σ_{n=0}^∞ F_n(x, y) t^n/n! .

The first several terms of F_n(x, y) are F_0(x, y) = 1, F_1(x, y) = x + y, F_2(x, y) = x^2 + 2xy + 2y^2 + y, and so on. Taking x = 0, the F_n(0, y) = F_n(y) (see [1]) are called the Fubini polynomials. If y = −1/2, then F_n(x, −1/2) = E_n(x), the Euler polynomials: E_0(x) = 1, E_1(x) = x − 1/2, E_2(x) = x^2 − x, and the Euler numbers satisfy E_6 = 0 and E_{2n} = 0 for all positive integers n. These polynomials appear in combinatorial mathematics and play a very important role in the theory and application of mathematics; thus many experts in number theory and combinatorics have studied their properties and obtained a series of interesting results. For example, Kim and others proved a series of identities related to F_n(x, y) (see [2][3][4]). T. Kim et al. [5] also studied the properties of the Fubini polynomials F_n(y), and proved an identity involving the Stirling numbers of the second kind S_2(n, k). Zhao and Chen [6] proved that, for any positive integers n and k, one has an identity in which the summation is taken over all k-dimensional non-negative integer coordinates (a_1, ..., a_k). The sequence {C(k, i)} is defined for any positive integer k and integers 0 ≤ i ≤ k. Some other papers related to Fubini polynomials and Euler numbers can be found elsewhere [7][8][9][10][11][12][13][14][15][16][17][18][19], and we do not repeat them here.

In this paper, as a note to [6], we study a similar calculation problem to Equation (2) for the two-variable Fubini polynomials F_n(x, y). We also introduce new second-order non-linear recursive polynomials, and then use these polynomials to give a new expression for the summation considered there. That is, we prove the following:

Theorem 1. Let h be a positive integer. Then, for any integer n ≥ 0, we have an identity in which U_{h,k}(x) is a second-order non-linear recurrence polynomial with U_{h,h}(x) = 1.

It is clear that our theorem is a generalization of Equation (2). Taking y = −1/2, n = 0, x = 0 and x = 1 in this theorem, respectively, and noting that U_{h,0}(1) = 0, E_0(1) = 1 and E_n(1) = −E_n for all n ≥ 1, we can deduce the following five corollaries:

Corollary 2. For any positive integer h ≥ 1 and real x, we have the identity.

Corollary 3. For any positive integer h ≥ 1, we have the identity.

Corollary 4. For any positive integer h ≥ 1, we have the identity.

From Equation (2) with y = −1/2 and Corollary 3 we can deduce the identities U_{h,i}(0) = C(h, h − i) for all non-negative integers 0 ≤ i ≤ h. On the other hand, from the definition of U_{h,k}(1), we can easily prove that the U_{h,k}(1) are the coefficients of a certain polynomial. Thus, if h = p is an odd prime, then using elementary number theory methods we deduce the following:

Corollary 5. Let p be an odd prime. Then, for any positive integer 2 ≤ k ≤ p − 1, we have the congruence U_{p,k}(1) ≡ 0 mod p.

Taking h = p, noting that U_{p,p}(1) = 1, E_1 = −1/2 and U_{p,1}(1) = (p − 1)!
≡ −1 mod p (by Wilson's theorem), and then combining Corollaries 4 and 5, we have the following:

Corollary 6. Let p be an odd prime. Then, we have the congruence. This congruence was also recently obtained by Hou and Shen [12] using different methods.

Several Simple Lemmas

In this section, we give several lemmas that are necessary in the proof of our theorem. First, we have the following:

Lemma 1. Let f(t) = e^{xt} / (1 − y(e^t − 1)). Then, for any positive integer h and real numbers x and t, we have an identity for f^{(h)}(t), where U_{h,i}(x) is defined as in the theorem, and f^{(h)}(t) denotes the h-th order derivative of f(t) with respect to the variable t.

Proof. We can prove Lemma 1 by mathematical induction. First, from the properties of the derivative, we compute f′(t); that is, Lemma 1 is correct for h = 1. Assuming that Lemma 1 is correct for 1 ≤ h = k, then, from Equation (3) and the definitions of U_{k,i}(x) and the derivative, we find that Lemma 1 is also correct for h = k + 1. This proves Lemma 1 by mathematical induction.

Lemma 2. For any positive integers h and k, we have a power series expansion.

Proof. For any positive integer k, the expansion follows from Equation (1) and the properties of power series; combining this with the complementary expansion, and using Equations (4) and (5) together with the multiplicative property of power series, proves Lemma 2.

Proof of the Theorem

In this section, we complete the proof of our theorem. In fact, from Equation (1) and Lemmas 1 and 2, and comparing the coefficients of the power series in Equation (6), we may immediately deduce the identity of the theorem. This completes the proof.
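The generating-function definition above can be checked mechanically. The following is a short sympy sketch (not part of the paper) that recovers the first few two-variable Fubini polynomials by Taylor expansion and confirms the Euler-polynomial specialisation at y = −1/2.

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
gf = sp.exp(x * t) / (1 - y * (sp.exp(t) - 1))

def fubini(n: int):
    # Since gf = sum_n F_n(x, y) t^n / n!, the n-th derivative at t = 0 is F_n.
    return sp.expand(sp.diff(gf, t, n).subs(t, 0))

assert sp.simplify(fubini(0) - 1) == 0
assert sp.simplify(fubini(1) - (x + y)) == 0
assert sp.simplify(fubini(2) - (x**2 + 2*x*y + 2*y**2 + y)) == 0
# F_n(x, -1/2) recovers the Euler polynomials, e.g. E_2(x) = x^2 - x:
assert sp.simplify(fubini(2).subs(y, sp.Rational(-1, 2)) - (x**2 - x)) == 0
```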
The Design of EMG Measurement System for Arm Strength Training Machine

Introduction

An arm strength training machine (ASTM) based on an embedded microcontroller system that utilizes a PMSM motor drive to simulate a stack of iron weights has better performance than the conventional exercise apparatus presented in [1]. Several studies indicate that chronic stroke patients who gained maximal functional benefits from biofeedback intervention initially had greater active range of motion at all major upper extremity joints [2][3][4]. Another study investigated neuromuscular electrical stimulation initiated by a surface electromyographic biofeedback threshold on knee extension active range of motion (AROM), function, and torque in patients after arthroscopic knee surgery [5]; it concludes that the usage of surface EMG-triggered neuromuscular electrical stimulation can improve the extension AROM. Consequently, the proper utilization of electromyographic biofeedback can lead to substantial improvements among selected chronic stroke patients and can be of considerable functional benefit to others [6,7]. Therefore, the usage of EMG not only can help physical therapy but also can achieve more effective rehabilitation [8,9].

Conventional exercise apparatus typically couple a stack of iron weights through a series of pulleys and levers to hand grips [10]. The stack of iron weights is usually mounted on guide rods for vertical reciprocal movement from a rest position upward, against the force of gravity, to an upper position. Lifting weights is accomplished by the user, who actuates a bar or another device operably connected to the weights. To vary the force opposing the user, the user is required to change the position of a mechanical locking pin and physically add or remove weights from the stack. This results in time consumption and inconvenience when the user changes the exercise force level between lifts; these are drawbacks of such an exercise apparatus [11,12]. To solve this problem, a closed-loop motor control system that can generate a user opposition force, and more particularly simulate a weight stack, is presented in [13]. Therefore, in order to obtain more effective rehabilitation when manipulating the arm strength training machine (ASTM), an EMG system incorporated with the original functions is designed and implemented in this paper.

The EMG measurement mechanism consists of the EMG electrodes, the EMG amplification circuits and the band-pass filter [14,15]. In order to obtain a "clean" signal, free of DC offset and high-frequency noise, the output signal from the instrumentation amplifier is filtered by a band-pass filter formed by a high-pass filter cascaded with a low-pass filter. The clean signal is then rectified to a DC value using the rectifier circuit. The rectified signal is fed into the microcontroller. Accordingly, the signal can be used to display the contraction of the muscle group when the user manipulates the ASTM [16]. A PMSM (Permanent Magnet Synchronous Motor) drive control system based on the microcontroller is also developed in this paper to generate a torque to oppose the user force [17,18]. The hardware circuits of the PMSM drive, such as the AC/DC rectifier, DC link, DC/AC inverter, EMG sensors, physiological amplifier, high-pass filter, low-pass filter, Hall-effect position sensors, and speed encoder, are well designed, simulated, and implemented. The software programs are written in C language and developed with the MPLAB
integrated development environment (IDE) tool by Microchip Technology Inc. The PMSM motor drive is used to simulate the weight stack that is usually employed in conventional exercise machines. The principle of the EMG measurement system is first derived and described in Section 2. The system hardware and software are then designed and realized in Section 3. Section 4 presents the experimental results of the ASTM and EMG measurement system for system verification. Finally, an arm strength training machine with an EMG measurement system is realized and demonstrated in the Conclusions. The experimental results show the feasibility and fidelity of the complete designed system.

The EMG Measurement System of ASTM

The system hardware of an arm strength machine with an EMG measurement system based on a PMSM motor drive is shown in Figure 1. It consists of a dsPIC30F4011 microcontroller, EMG measurement system, protection circuit, optical coupling isolation, inverter, current sensor, encoder, and communication interface. The PMSM motor drive is used to simulate the weight stack that is usually employed in conventional exercise machines. The microcontroller dsPIC30F4011, manufactured by Microchip Technology Inc., is the core controller of the ASTM. It is a 16-bit CPU with digital signal processing capability. Moreover, it supports many powerful modules such as a built-in PWM module, an addressable encoder interface module, and an input capture module. This makes the design easy to complete and thus shortens the development schedule. An independent power source is employed to supply the gates of the MOSFETs. The photocoupler TLP250 is used for electrical isolation between the microcontroller system and the high DC bus voltage as well as the independent power source [19,20].

The motor currents are sensed through the current detection circuit. The magnet pole and rotor position are detected by the Hall-effect sensor and the encoder. In this way, the speed and rotor position can be calculated and subsequently precisely controlled. The ACS712-20 current sensor IC, which has a resolution of 100 mV per ampere, is adopted for stator phase current detection. Since the microcontroller supports a 10-bit analog-to-digital converter (ADC), the full scale of 5 V corresponds to 1024 bits; in other words, one bit of the ADC represents 48.83 mA.

The biosignal amplification system shown in Figure 2 is used to amplify the electrophysiological signal generated from physiological responses inside the body, usually at the microvolt level. These signals, after amplification, are processed for image processing and medical diagnosis. EMG measurement is one of the most popular applications of biosignal amplification. The electrophysiological signals of muscle contraction and relaxation sensed by the EMG sensors are then amplified by the physiological amplifier. From these processed EMG signals, one can predict the strength of the muscle contraction and the state of the muscle motion so as to facilitate the rehabilitation or exercise of patients.

The EMG measurement mechanism consists of the EMG electrodes, the EMG physiological amplifier, a high-pass filter, a low-pass filter, and a full-wave rectifier, as shown in Figure 2.
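A minimal sketch of the current-sensing arithmetic described above: the ACS712-20 outputs 100 mV per ampere, and the 10-bit ADC maps 0-5 V onto 1024 codes, so one ADC code corresponds to about 48.83 mA. The function name is illustrative.

```python
ADC_BITS = 10
V_REF = 5.0                    # volts, ADC full scale
SENSITIVITY = 0.100            # volts per ampere (ACS712-20)

VOLTS_PER_CODE = V_REF / 2**ADC_BITS          # ~4.883 mV per code
AMPS_PER_CODE = VOLTS_PER_CODE / SENSITIVITY  # ~48.83 mA per code

def adc_code_to_current(code: int) -> float:
    """Convert a raw ADC reading into a phase current in amperes."""
    return code * AMPS_PER_CODE

print(round(AMPS_PER_CODE * 1000, 2))  # 48.83 (mA per bit, as in the text)
```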
Two electrodes of the EMG sensor are attached to the surface skin of an arm. A third electrode is attached to the common point for voltage reference. The potential difference generated when the muscle group contracts is fed into the instrumentation amplifier for amplification [14]. Since the input resistance of an instrumentation amplifier is very high, it is well suited to picking up the EMG signal. In order to obtain a "clean" signal, free of DC offset and high-frequency noise, the output signal from the instrumentation amplifier is filtered by a band-pass filter formed by a high-pass filter cascaded with a low-pass filter, with a lower 3 dB frequency of about 100 Hz and an upper 3 dB frequency of about 500 Hz. The clean signal is then rectified to a DC value using the rectifier circuit. The rectified signal is fed into the microcontroller. After proper processing, it is transmitted to the human interface via the communication interface to display the contraction of the muscle group when the user manipulates the ASTM.

The physiological signal input to the EMG sensor is usually in microvolts. In order to pick up this minute signal, an instrumentation amplifier with high input resistance, the INA2126 manufactured by Burr-Brown Corporation, is used in this paper. Figure 3 shows the internal structure of the INA2126 instrumentation amplifier. It is a precision instrumentation amplifier for accurate, low-noise differential signal acquisition. The transfer function between the output and the differential input can be expressed as V_O = G(V_+ − V_−), where, with the internal resistors R_1 = R_2 = 10 kΩ and R_3 = R_4 = 40 kΩ, the gain set by the external resistor R_G is G = 5 + 80 kΩ/R_G. The resistor R_G is the most appropriate component for adjusting the voltage gain, since adjusting the other components would require changing more than two resistances. R_G is equal to 842 Ω for a voltage gain of 100 in this design.

Since the spectrum of the EMG biosignal usually lies between 50 Hz and 500 Hz, the high-pass cutoff frequency is designed to be 100 Hz in this paper. The second-order high-pass filter designed in this paper is shown in Figure 4. Its transfer function, lower cutoff frequency, and midband voltage gain follow the standard second-order active high-pass form. Regarding the high-pass filter design, with filter coefficients of 0.16 and 1, C_1 = 0.1 µF, R_1 = 39.3 kΩ, R_3 = 2.8 kΩ, and R_4 = 25 kΩ, the voltage gain is about 10 and the cutoff frequency is around 100 Hz. The output signal from the high-pass filter is then fed to the second-order low-pass filter shown in Figure 5.
The transfer function of the second-order low-pass filter follows the standard second-order active low-pass form, with upper 3 dB frequency f_H = 1/(2π R_1 C_1) and a corresponding midband voltage gain. Considering the low-pass filter design, with filter coefficients of 1 and 1, C_1 = 0.1 µF, R_1 = 3.18 kΩ, R_3 = 6.3 kΩ, and R_4 = 3.18 kΩ, the voltage gain is 1.5 and the cutoff frequency is around 500 Hz. The voltage gain between the analog output signal of the EMG measurement system, V_3, and the differential input signal of the EMG sensor, (V_+ − V_−), can be expressed as V_3 = A_V (V_+ − V_−), where A_V is the overall voltage gain of the instrumentation amplifier, high-pass filter, and low-pass filter; in this case, A_V is equal to 15000.

The equivalent circuit of a PMSM motor is shown in Figure 6. The stator phase voltage equations, relating the stator phase voltages (v_a, v_b, v_c) to the stator phase currents (i_a, i_b, i_c) and the back electromotive forces (e_a, e_b, e_c), can be written for each phase, where R_a, R_b, and R_c represent the phase resistances, L_a, L_b, and L_c represent the self-inductances, the mutual inductances couple each pair of phases, and e_a, e_b, and e_c represent the back EMF of each phase. If a three-phase balanced system is considered, the stator voltage equations can be rearranged accordingly.

In steady state, the air-gap power is expressed in terms of the electromagnetic torque and speed, from which the electromagnetic torque can be represented. Rearranging (12), the electromagnetic torque can be expressed in terms of the phase currents. The load model can be expressed in terms of the motor speed ω, a moment of inertia J in kg-m²/sec², and a viscous friction B in N-m/rad/sec. The electromagnetic torque T_e in N-m then drives the load torque T_L in N-m, as represented in (13) [20]:

T_e = J dω/dt + B ω + T_L .

The actual force displayed in the human interface panel is obtained from the motor torque. The relationship between the developed torque and the total current is given in (12). Since the current is captured by current sensors and converted to 10-bit digital signals, the ACS712-20 current sensor, which generates 100 mV for each 1 A of current, is used in this system. If a 5-volt reference voltage is employed for the ADC converter, the actual force can be expressed by (14), where F_actual is the digital value of the actual force displayed in the human interface panel, I_total is the total motor current, K_t is the torque constant of the PMSM motor, equal to 10.724 kgf-cm, and K_I is the conversion factor, 48.83 mA per bit in this case and subject to change if a different current sensor is used.
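A minimal sketch of the force read-out described above. The exact form of Equation (14) did not survive extraction, so this combines the quantities the text names: ADC codes to total current (48.83 mA per bit), then torque via the torque constant K_t = 10.724 kgf-cm; the shaft-radius division is an assumption for converting the torque (kgf-cm) into a force reading (kgf).

```python
K_T_KGF_CM = 10.724      # PMSM torque constant, kgf-cm per ampere (assumed units)
AMPS_PER_BIT = 0.04883   # ACS712-20 with a 10-bit, 5 V ADC
SHAFT_RADIUS_CM = 1.0    # hypothetical motor shaft radius

def actual_force_kgf(adc_code_total: int) -> float:
    i_total = adc_code_total * AMPS_PER_BIT          # amperes
    torque = K_T_KGF_CM * i_total                    # kgf-cm
    return torque / SHAFT_RADIUS_CM                  # kgf at the shaft surface

# e.g. an aggregate reading of 100 ADC codes:
print(round(actual_force_kgf(100), 2))
```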
The System Software Development

The system software program is developed under the MPLAB IDE software platform and written in C language. The system software structure for the microcontroller firmware is shown in Figure 7. The initializations for the I/O configuration, Timer 1, Timer 2, ADC, and PWM settings are processed first in the main program. Most of the ASTM functions are programmed in the microcontroller firmware, which includes the circuit protection mechanism, the analog-to-digital converter for the EMG system, PWM generation, motor current calculation, rotor position and speed calculation, rotor pole position, and the transmission and receiving of the communication interface, as shown in Figure 7. The flowchart of the main program for the microcontroller firmware is shown in Figure 8. Since the resolution of the encoder is 2500 pulses per revolution, the value of the counter in the microcontroller will be 5000 counts per revolution. The motor speed is obtained from the difference between the current counter value and the last counter value, both acquired from Timer 2 in the capture interrupt service routine, as shown in Figure 9. The EMG analog signal input to the microcontroller is first converted to a digital signal via the ADC module embedded in the microcontroller. Accordingly, the human interface can display the converted EMG signal as well. The human interface design related to the peripheral devices can be found in [13]. The Capture Counter in Figure 9 represents the distance the motor has run; consequently, the displacement of a user performing the arm exercise can be obtained. The relationship between the actual speed shown in the human interface display and the motor speed is given by (15), where v_actual is the speed displayed in the human interface, in cm/sec, and r is the radius of the motor shaft, in cm. The communication between the ASTM and the human interface is established by a UART interface which complies with the RS-232 serial communication standard. The EIA-232 drivers/receivers of the MAX232, which includes a capacitive voltage generator providing EIA-232 voltage levels from a 5 V supply, are used for voltage level conversion so that the microcontroller communication can meet the RS-232 standard. The system software for the human interface, including the serial communication program, is developed on a PC and written in C language. The firmware of the ASTM is programmed with the MPLAB development tool by Microchip Technology Inc.,
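A minimal sketch of the speed calculation described above: the encoder gives 2500 pulses per revolution (5000 counts after edge counting), and the linear speed shown on the human interface follows from the shaft radius r in cm. Equation (15) itself did not survive extraction, so this is one consistent reading of it, not a quote.

```python
import math

COUNTS_PER_REV = 5000
SHAFT_RADIUS_CM = 1.0          # hypothetical value of r

def actual_speed_cm_per_s(delta_counts: int, delta_t_s: float) -> float:
    """Surface (cable) speed from the Timer 2 capture-counter difference."""
    revs_per_s = (delta_counts / COUNTS_PER_REV) / delta_t_s
    return 2 * math.pi * SHAFT_RADIUS_CM * revs_per_s

# e.g. 500 counts captured over 0.1 s:
print(round(actual_speed_cm_per_s(500, 0.1), 2))
```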
as shown in Figure 7. The desired force command and desired speed command are transmitted to the ASTM via the UART communication interface. The programming flowchart is shown in Figure 10. Both the actual force and the actual speed, as well as the EMG signal, are received by the UART controller through the UART communication interface. The data of actual force, actual speed, and the EMG signal are scaled to the corresponding coordinates and saved in EXCEL data format. The EMG signal is sampled every 1 ms. As discussed in Section 2, the signals from the EMG sensors first go through the EMG amplification system and are then fed to the microcontroller via the ADC converter. The real-time EMG signals are converted to digital values and stored in the ADC buffer. In order to distinguish the real electric signal of muscle activity from other noise interference, a threshold voltage of about 1 volt is adopted in the EMG measurement system. In other words, the digital value is zero if the analog output of the EMG measurement system is less than 1 volt; this means the sampled EMG muscle signal is considered noise interference. In order to clearly observe the muscle activity while the user manipulates the ASTM, 100 sampled EMG signals are averaged and denoted as EMG_avg. The EMG_avg, together with the real-time current and speed, is sent to the PC human interface for panel display via the RS-232 communication interface. The detailed flowchart for the EMG process is shown in Figure 11.

In this paper, the main aim of using the EMG sensors in the ASTM is to observe the muscle activity through the waveform on the human interface display panel while the user exercises with the ASTM. Consequently, the user can take more interest in the exercise and gain more motivation for rehabilitation if physical therapy is required. Furthermore, the physician and physical therapist can understand the therapy progress, as well as the practical conditioning of the patient's rehabilitation, by observing the muscle activity on the human interface display.

The envelope of the EMG waveform in the human interface display therefore represents the strength of the muscle activity: the larger the envelope (amplitude) of the waveform, the more strength the user exerts. The relationship between the analog output of the EMG measurement system and the amplitude of the envelope of the EMG signal can be expressed by (16), where EMG is the digital output of the EMG signal shown in the human interface display, V_3 is the analog output of the EMG measurement system, V_th is the threshold voltage, EMG_bias is the digital output bias, and K is the conversion ratio between the digital output of the EMG signal and the analog output of the EMG measurement system. The conversion ratio can be adjusted according to the user's needs: the amplitude of the EMG digital output increases as K becomes larger and decreases as it becomes smaller. Since a 10-bit analog-to-digital converter is employed in the system, both EMG and EMG_bias have 10-bit data length. The digital output bias is adjusted to compensate for the offset of the EMG measurement system.
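A minimal sketch of the EMG digitising pipeline described above: samples below the 1 V threshold are treated as noise (zeroed), 100 samples are averaged into EMG_avg, and Equation (16) maps the analog output V_3 to a 10-bit display value. The linear form K(V_3 − V_th) + EMG_bias is one consistent reading of (16), not a quote of the paper's equation.

```python
V_TH = 1.0        # threshold voltage (volts)
K = 43            # conversion ratio (value used in the experiments)
EMG_BIAS = 450    # digital output bias (value used in the experiments)

def emg_digital(v3: float) -> int:
    if v3 < V_TH:                       # below threshold: noise, output zero
        return 0
    return min(1023, int(K * (v3 - V_TH) + EMG_BIAS))   # clamp to 10 bits

def emg_avg(samples_1ms: list) -> float:
    assert len(samples_1ms) == 100      # 100 samples at 1 ms per sample
    return sum(emg_digital(v) for v in samples_1ms) / 100

print(emg_digital(2.0))   # e.g. a 2 V envelope sample -> 493
```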
The Experimental Results

The prototype of the arm strength training machine with the EMG measurement system is tested under different load conditions, which are realized with a dynamometer. The manipulation of the ASTM with the EMG measurement system is operated via the designed human interface, as shown in Figure 12. Both the desired force command of 5 kg-cm and the desired speed of 10 cm/sec are displayed in the upper window of Figure 12. In order to simulate the practical situation of lifting weights, the actual motor torque and the actual speed are recorded, and the experiment is repeated over the same cycle. Observing the waveform of Figure 13, it can be seen that the motor develops about 5 kg-cm of torque to counter the force exerted by the user. This verifies the feasibility of the system design. The data displayed in Figures 12 and 13 are first saved in memory and then plotted using Microsoft EXCEL.

The EMG measurement system is basically composed of a high-pass filter and a low-pass filter. Two different frequencies of input signal are applied to the high-pass filter for frequency response testing: one is 50 Hz, which is in the stop band, and the other is 100 Hz, in the pass band. The high-pass filter has a voltage gain of 20 dB. The upper and lower traces of Figure 14 represent the waveforms of the input and output signals, respectively. It can be seen that the output signal is attenuated by 6.6 dB at 50 Hz. An input signal with a frequency of 100 Hz is applied to the high-pass filter, as shown in Figure 15: the amplitude of the output signal is equal to 0.707 of that of the input signal; in other words, there is a 3 dB difference between the amplitudes of the input and output signals. The Bode plot of the magnitude response for the high-pass filter is shown in Figure 16. The same testing method can be applied to the low-pass filter, which has a designed voltage gain of 1.5. Figure 17 shows the waveforms of the output and input signals at a frequency of 500 Hz. A different input frequency of 1 kHz is also tested; the results are shown in Figure 18. The Bode plot of the magnitude response of the low-pass filter is shown in Figure 19. From the Bode plot in Figure 19, it can be seen that the magnitude of the output signal is reduced for frequencies above 500 Hz while, below 500 Hz, it stays flat. The cutoff frequency of 500 Hz is the same as designed in this paper.

In this preliminary test, the experimental amplitude of the EMG envelope, obtained from the EMG output in (16), is depicted in the bottom graph of Figure 12 for EMG_bias = 450, K = 43, and V_th = 1 V. The experimental test for this waveform proceeds while the user manipulates the ASTM with the EMG electrodes attached to the biceps brachii. In this way, the physician and physical therapist can understand the therapy progress, as well as the practical conditioning of the patient's rehabilitation, by observing the muscle activity on the human interface display. The practical system configuration of the designed ASTM with EMG sensors is shown in Figure 22.
Conclusions

The establishment of the EMG measurement system in the ASTM can make exercise and rehabilitation therapy more friendly and effective. This paper designs an EMG physiological amplifier system to monitor the muscle activity of the user when manipulating the ASTM. From the experimental results, the system hardware, including the microcontroller, protection circuit, optical coupling isolation, three-phase inverter, current sensor, EMG sensors, encoder, and communication interface, is well designed. Though only the desired force of 5 kg and desired speed of 10 cm/sec are tested in the experiment, forces up to 15 kg and down to 0.5 kg have been well tested in practice for verification of system integrity. The voltage gain of the instrumentation amplifier is designed to be 100. Because the extremely small signal is not easy to generate in the lab, the designed instrumentation amplifier is tested using circuit simulation. However, the EMG physiological signal of the biceps brachii input to the instrumentation amplifier is amplified to an appropriate voltage level, as the experimental results show. The relationship between the analog output of the EMG measurement system and the amplitude of the envelope of the EMG signal displayed in the human interface can be expressed by (16). However, the parameters in (16) were obtained from preliminary tests of the experimental prototype system carried out by a few lab members. In order to obtain more exact and precise parameters, statistical methods will be considered and employed in experimental tests, such as the placement of the EMG electrodes and testing samples of different sexes and ages, in future work. For the testing of the high-pass and low-pass filters, though only two different frequencies applied to each filter are shown in the experimental results, several frequencies located in the stop band and pass band were also tested to complete the Bode plots. Further, the fundamentals of the EMG measurement system have been derived, designed, and described in detail. The experimental results have verified the feasibility of each design procedure. Moreover, both the microcontroller firmware and the user interface are programmed and described in detail. Finally, an arm strength training machine with electromyographic sensors for biofeedback is realized and demonstrated in this paper. The experimental results show the feasibility and fidelity of the complete designed system.

Figure 1: The system structure of ASTM with EMG.
Figure 2: The function block diagram of the EMG measurement system.
Figure 7: The system software structure of ASTM with the EMG measurement system.
Figure 8: The flowchart of the main program.
Figure 10: The programming flowchart for communication between the human interface and the ASTM.
Figure 11: The flowchart of the EMG process.
Figure 12: The desired force of 5 kg-cm and desired speed of 10 cm/sec.
Figure 13: The developed motor torque for employing a 5 kg-cm force.
Figure 14: The input frequency of 50 Hz applied to the high-pass filter.
Figure 15: The input frequency of 100 Hz applied to the high-pass filter.
Figure 16: The Bode plot of the magnitude response for the high-pass filter.
Figure 17: The input frequency of 500 Hz applied to the low-pass filter.
Figure 18: The input frequency of 1 kHz applied to the low-pass filter.
Figure 22: The practical system of ASTM with the EMG measurement system.
Update of the flavour-physics constraints in the NMSSM

We consider the impact of several flavour-changing observables in the B- and Kaon sectors on the parameter space of the NMSSM, in a minimal flavour violating version of this model. Our purpose consists in updating our previous results in arXiv:0710.3714 and designing an up-to-date flavour test for the public package NMSSMTools. We provide details concerning our implementation of the constraints in a series of brief reviews of the current status of the considered channels. Finally, we present a few consequences of these flavour constraints for the NMSSM, turning to two specific scenarios: one is characteristic of the MSSM-limit and illustrates the workings of charged-Higgs and genuinely supersymmetric contributions to flavour-changing processes; the second focusses on a region where a light CP-odd Higgs is present. Strong limits are found whenever an enhancement factor - large tan β, light H^±, resonant pseudoscalar - comes into play.

Introduction

Flavour-changing rare decays and oscillation parameters are known as uncircumventable tests of the Standard Model (SM) and its new-physics extensions. In the quark sector of the SM, flavour violation is induced by the non-alignment of the Yukawa matrices, resulting in a Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix, and conveyed only by charged currents at tree level. While tensions are occasionally reported (see e.g. [1] and references therein for a recent example), this minimal picture seems globally consistent with the current experimental status of flavour observables [2], such that new sources or new mediators of flavour violation are relevantly constrained by these measurements. Yet, a proper confrontation of a new-physics model with such experimental results does not depend exclusively on the accuracy of the measurements or of the theoretical predictions in the SM, but also on the magnitude of the effects induced beyond the SM (BSM). In this paper, we will consider the well-motivated supersymmetric (SUSY) extension of the SM known as the Next-to-Minimal Supersymmetric Standard Model (NMSSM) - see [3] for a review - in a minimal flavour-violating version: we will assume that the squark sector is aligned with the mass states in the quark sector, so that, at tree level, only charged particles convey flavour-violating effects, which are always proportional to the CKM matrix. Our aims consist in updating our previous work in [4] to the current status of flavour observables and accordingly designing a tool for a test in the flavour sector which will be attached to the public package NMSSMTools [5]. Beyond ours, several projects for the study of flavour observables in the NMSSM, or more generally in SUSY extensions of the SM, have been presented in the literature: see e.g. [6][7][8][9][10].

Our original work dealing with B-physics in the NMSSM [4] discussed the processes BR(B → X_s γ), BR(B^0_s → µ^+µ^−), BR(B^+ → τ^+ν_τ) as well as the oscillation parameters ∆M_{d,s}. These processes had been implemented in the Fortran code bsg.f at (grossly) leading order (LO) in terms of the BSM contributions, using the NLO formalism for the SM and locally correcting it to account for NNLO effects in an ad-hoc fashion: in other words, this analysis essentially compiled results of the late 90's / early 2000's [11][12][13][14][15][16][17][18].
In doing so, it ignored existing NLO results in the MSSM [19][20][21], focussed instead, at the loop level, on tan β-enhanced Higgs-penguin contributions [15,18] and only caught the early developments of the NNLO calculation in the SM [22,23]. The SM analysis of BR(B → X_s γ) at NNLO has been recently updated in [24,25]: the corresponding results account for significant progress since [22] and shift the SM expectation ∼ 1σ upwards, very close to the experimental measurement. Similarly, BR(B^0_s → µ^+µ^−) has been considered up to three-loop order in the SM [26][27][28], shifting the result upwards with respect to the LO. Moreover, LHCb and CMS now provide an actual measurement of this process [29], which tightens the associated constraint significantly with respect to the previous upper limits. The SM status of B → X_s l^+l^− has also received some attention lately [30]. Finally, several other channels - e.g. B^+ → D^(*) τ^+ν_τ, the b → sνν or the s → dνν transitions - have been suggested as complementary probes of new physics.

In addition to these recent developments concerning the SM and experimental status of flavour processes, we note that, as the NLO contributions in supersymmetric extensions of the SM can be extracted from e.g. [19][20][21], it is scientifically sound to include them in our implementation of the observables, so as to reduce the associated uncertainty in the test. This is particularly true in the case of BR(B^0_s → µ^+µ^−), since this process is now measured and no longer simply bounded two orders of magnitude from above. The substantial shift in the SM estimate for BR(B → X_s γ) also tightens the constraint on BSM effects, so that enhanced precision is relevant. Our purpose in this paper consists in describing the new implementations of flavour observables within NMSSMTools. We will first remind the reader succinctly of the formalism employed to account for modified Higgs couplings at large tan β. We will then briefly review each observable and refer explicitly to the literature that we use in their implementation: in a first step, we shall focus on processes in the B sector before turning to Kaon physics. Finally, we will illustrate the workings of the new flavour constraints on the parameter space of the NMSSM, comparing the results of our new implementation with the former ones in a few scenarios, and discussing the relevance of the new observables which have been included.

tan β-Enhanced corrections to the Higgs-quark couplings

The Higgs sector of the NMSSM consists of two doublets H_u = (H^+_u, H^0_u)^T and H_d = (H^0_d, H^−_d)^T, as well as a singlet S. As in the MSSM, the tree-level couplings to quarks involve H_u and H_d in a Type-II 2-Higgs-Doublet-Model (2HDM) fashion, where the diagonal Yukawa parameters can be written in terms of the tree-level quark masses and the Higgs vacuum expectation values (v.e.v.'s), defined as v_u ≡ ⟨H^0_u⟩ and v_d ≡ ⟨H^0_d⟩, with tan β = v_u/v_d. Yet, radiative corrections, particularly those driven by the SUSY sector, spoil this Type-II picture and generate effective terms such as H^†_u q_L d^c_R couplings - in the SU(2) × U(1)-conserving approximation. While in principle a higher-order concern, such terms may be enhanced for large values of tan β, so that a resummation becomes necessary for a consistent evaluation. Here, as in our original work, [18] remains our main guide. This paper shows that the corrections to the Higgs-quark couplings driven by supersymmetric loops are well approximated in an effective SU(2) × U(1)-conserving theory.
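As an illustration of why the resummation is needed, the standard large-tan β result of the literature (the formalism underlying [18], sketched here rather than quoted from this text) relates the bottom Yukawa coupling to the apparent mass through the loop-induced, tan β-enhanced parameter ε_b:

```latex
% Sketch of the standard large-tan(beta) resummation for down-type quarks:
% the epsilon_b correction is loop-suppressed but tan(beta)-enhanced, so it
% must be resummed to all orders in (epsilon_b tan(beta)).
\begin{equation}
  \bar m_b \;=\; y_b\, v_d \,\bigl(1 + \varepsilon_b \tan\beta\bigr)
  \quad\Longleftrightarrow\quad
  y_b \;=\; \frac{\bar m_b}{v_d\,\bigl(1 + \varepsilon_b \tan\beta\bigr)} .
\end{equation}
```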
Corrections to the down-Yukawa couplings and the associated Higgs-quark vertices are dominated by the loop-induced and tan β-enhanced contributions to the H_u^† q_L d_R^c operator, which, in turn, can be encoded as corrections to the down-type quark mass matrix ∆m_d. Corrections to the up-Yukawa couplings are somewhat more subtle, as tan β-enhanced terms do not appear at the level of the quark masses, but only at that of some - e.g. charged - Higgs couplings. There, [18] shows that the parametrization of [15], complemented by additional corrections, gives competitive numerical results. We will be working within these approximations. In this framework, all the tan β-enhanced Higgs-quark vertices can be encoded in terms of the 'apparent' quark masses m_q^eff and CKM matrix elements V^eff, as well as a set of 'ε-parameters' which parametrize:
• corrections to the down-type masses: such diagonal contributions are mediated by gluino-sdown / neutralino-sdown or chargino-sup loops and can be extracted from Eqs.(2.5) and (A.2) in [18].
• off-diagonal corrections to the down-type mass matrix: these, in our minimal flavour violation approximation, are exclusively mediated by chargino-sup loops. Eqs.(2.5) and (A.2) of [18] again provide explicit expressions in terms of the supersymmetric spectrum.
• for the up-type couplings, ε_u(d) is defined as the effective correction to the H^+ u_R^c d_L vertex (see [15]); it is computed according to Eqs.(5.6)-(5.8) of [18], i.e. including relevant electroweak-gauge effects.
• corrections to the CKM matrix elements, which can be encoded in terms of related ε̃-parameters.
The relevant flavour-changing Higgs-quark couplings are then given in Eqs.(3.55)-(3.61) and (5.8) of [18]. We note that, in this approach, the couplings of the Goldstone bosons expressed in terms of the effective quark masses and CKM elements are formally identical to the tree-level vertices expressed in terms of the tree-level masses and CKM matrix, so that the Goldstone bosons do not convey explicit tan β-enhanced terms. Another remark addresses the explicit calculation of the ε-parameters: we neglect the Yukawa couplings of the two first generations and assume degeneracy of the corresponding sfermions. Consequently, the unitarity of the CKM matrix can be invoked in order to include the contributions of both generations at once.

Observables in the B-sector

As mentioned earlier, the status of B → X_s γ in the SM has substantially evolved since the analyses of [22,23]. The new NNLO SM estimate for E_0 = 1.6 GeV [24,25] (where E_0 is the cut on the photon energy), BR[B → X_s γ]_SM = (3.36 ± 0.23) · 10^-4, shifted ~1σ upwards with respect to the older estimate, is indeed very close to the experimental measurement [31] (combining results from CLEO, Belle and BABAR), BR[B → X_s γ]_exp = (3.43 ± 0.21 ± 0.07) · 10^-4. Trying to account for this result by tuning the c-quark mass in the NLO formalism, as we did in [4], potentially opens new sources of uncertainty. On the other hand, employing the full NNLO formalism is an effort-consuming task of limited interest (in our position), considering that BSM effects will be included at NLO only (see below). We therefore settled for a 'middle way', using the NNLO formalism but encoding the pure SM NNLO effects within free parameters which are numerically evaluated by comparison with the numbers provided in [24,25].
• P(E_0) encodes the perturbative contribution in the ∆B = 1 OPE in terms of the Wilson coefficients C_i^eff at the low-energy scale µ_b ~ 2 GeV.
The corresponding expression can be taken from Eqs.(2.10)-(3.11) in [22]. Beyond the Wilson coefficients at LO, the coefficients K^(1)_ij play a central role in it (for simplicity of notation, we factor out 2 in the case of K^(1)_ii). They can be extracted from Eqs.(3.2)-(3.13) of [22] as well as Eqs.(3.1) and (6.3) of [32], and convey the NLO corrections to the partonic process b → sγ as well as the associated Bremsstrahlung contributions. All the Φ_ij(δ, z) functions entering K^(1)_ij are fitted numerically. The only real difficulty lies in incorporating the third line of Eq. 5, which contains the NNLO Wilson coefficient C^(2)eff_7(µ_b) and the NNLO coefficients K^(2)_ij. However, if we confine ourselves to NLO for the BSM contributions, we see that these missing quantities originate purely from the SM and may be parametrized in terms of SM-valued quantities P^(2)(SM) and Q^(2)_{7,8}(SM), to be determined below.
• N(E_0) stands for non-perturbative corrections. Following Eqs.(3.9) and (3.14) of [33], one can use the input from Appendix D - Eq.(D.1) - of [25], with the dictionary: λ_1 = -µ_π^2; λ_2 = µ_G^2/3; ρ_1 = ρ_D^3; ρ_2 = ρ_LS^3/3. This procedure is aimed at parametrizing phenomenologically the non-perturbative effects, the parameters being determined in a fit of the semi-leptonic B decays. [24,25] then invoke [34] to estimate the irreducible uncertainties.
• We come to the Wilson coefficients at the low scale. These are connected to the Wilson coefficients at the high scale via the Renormalization Group Equations (RGE). For the LO coefficients, the solution to the RGE's - provided C^(0)eff_i(µ_0) = δ_i2, for i = 1, ..., 6 - can be found in Appendix E of [16]. Alternatively, one may directly use [35], which allows one to derive the NLO coefficients as well, with the 'magic numbers' a_i and m_kl,i of Tables 1, 3, 4 in the cited reference. Finally, the QED coefficient can be obtained from Eqs.(27), (85) and (86) of [17] - multiplied by a factor (α_em/4π)^-1 - and proceeds originally from the study in Ref. [36].
Having sketchily described the general formalism in the previous lines, we are left with the sole remaining task of defining the Wilson coefficients at the matching scale. From the discussion above, it should be clear that, at the considered order, we need only the C^(0)eff_k(µ_0) and C^(1)eff_k(µ_0). Still, this improves on the treatment in [4], where only 2HDM effects were included at NLO.
• LO contributions from chargino/stop loops were given in [14] - Eqs.(4)-(7) - but the NLO effects in Eqs.(9)-(27) (of the same reference) are not straightforward. Instead, we prefer to use [19,21]. In order to avoid the explicit ln(µ_0/m_stop) in the NLO coefficient, we define the LO coefficients at the stop (or scharm/sup) scale directly, then run them down to µ_0 via the RGE's - taking into account the flavour dependence in the running, i.e. the anomalous-dimension exponents 14/23 and 16/23 for five flavours become 14/21 and 16/21 for six flavours.
• Finally, for tan β-enhanced two-loop effects at the level of the Higgs-quark couplings, we no longer follow [15], Eqs.(18)-(19) - which are phrased in terms of the tree-level, and not of the apparent, parameters -, but Eqs.(6.51) and (6.53) of [18], the former amounting to 0 for the G^± contribution in the SU(2) × U(1)-conserving limit. As before, effective neutral Higgs / bottom quark flavour-changing loops are included - see Eq.(6.61) in [18].
At this point, the implementation at SM NNLO + BSM NLO is almost complete.
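As an illustration of the LO step of this running, the sketch below evolves C_7^eff from the matching scale down to µ_b with the standard LO 'magic numbers' of the SM literature; the α_s inputs are indicative only, and the actual code relies on the full LO+NLO tables of [35].

```python
# Standard LO 'magic numbers' for the evolution of C7eff down to mu_b;
# eta = alpha_s(mu_0) / alpha_s(mu_b).
A = [14/23, 16/23, 6/23, -12/23, 0.4086, -0.4230, -0.8994, 0.1456]
H = [626126/272277, -56281/51730, -3/7, -1/14,
     -0.6494, -0.0380, -0.0185, -0.0057]

def c7eff_lo(eta, c7_0, c8_0, c2_0=1.0):
    """LO effective coefficient C7eff(mu_b) from the matching-scale values
    c7_0, c8_0 (and c2_0 = 1 in the SM): the first two terms are the
    multiplicative running of C7 and the admixture of C8, while the sum
    encodes the mixing with the four-quark operators."""
    return (eta**(16/23) * c7_0
            + (8/3) * (eta**(14/23) - eta**(16/23)) * c8_0
            + c2_0 * sum(h * eta**a for h, a in zip(H, A)))

# Indicative values: alpha_s(160 GeV) ~ 0.109, alpha_s(2 GeV) ~ 0.29
print(c7eff_lo(eta=0.109 / 0.29, c7_0=-0.19, c8_0=-0.10))  # ~ -0.39
```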
The only remaining task consists in identifying the NNLO coefficients P^(2)(SM) and Q^(2)_{7,8}(SM). For this, we take good care of employing the input parameters described in Appendix D of [25] and turning off the BSM contributions. To recover the branching ratio in [24] - see Eq.(6) of this reference -, we determine a correction P^(2)(SM) of the order of 5% of the total P(E_0). Then, linearizing Eq. 5 in terms of LO BSM coefficients at the matching scale, we find that our implementation should be supplemented with coefficients Q^(2)_{7,8}(SM) at the permil level in order to recover the numbers appearing in Eq.(10) of [24]. These numbers are of the expected order of magnitude. Let us finally comment on the error estimate. The SM + CKM + non-perturbative uncertainties have been combined in quadrature in Eq.(6) of [24] and we simply double the resulting number 0.23 in order to obtain 2σ bounds. On top of this SM + CKM + non-perturbative error, we add linearly a higher-order uncertainty of 10% on the LO and 30% on the NLO new-physics contributions, each type - namely 2HDM, SUSY, neutral Higgs - being added separately in absolute value. To incorporate this uncertainty, we simply use the linearization which has been employed to determine the NNLO parameters just before.

Now let us turn to BR[B → X_d γ], which was originally considered in [17] at NLO and then, in view of the BABAR measurement [37], by [38]. Finally, [24] extended the analysis to NNLO. Beyond the trivial substitution s → d in CKM matrix elements, the chief difference with BR[B → X_s γ] originates in sizable contributions from the partonic process b → dūuγ - since the CKM ratio |V_ub V_ud^* / (V_tb V_td^*)| is not negligible. The latter can be sampled in several ways - see e.g. [39] -, which provides some handle on the associated error estimate. We will be content with the evaluation using constituent quark masses given in Eq.(3.1) of [39], setting the ratio m/m_b - with m standing for the mass of the light quarks - in such a way as to recover, in the SM limit, the central value of [24], Eq.(8). We can then check the consistency with Eq.(10) of [24] for the new-physics contributions. As before, the SM + CKM + non-perturbative uncertainties are taken over from [24], Eq.(8) - again we double the error bands to test the observable at the 2σ level - and we add linearly the new-physics uncertainties. On the experimental side, the BABAR measurement [37] has to be extrapolated to the test region, leading to the estimate of [38].
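The error policy just described - SM-type uncertainties combined in quadrature and doubled for a 2σ test, each new-physics contribution carrying a linearly added relative uncertainty - can be summarized in a few lines; the numerical inputs below are purely illustrative:

```python
def two_sigma_window(central_sm, err_sm_1sig, bsm_terms):
    """Allowed 2-sigma window for a branching ratio: the quoted SM + CKM +
    non-perturbative 1-sigma error is doubled, while the higher-order
    uncertainty on each BSM contribution, given as (value, relative error),
    is added linearly in absolute value."""
    central = central_sm + sum(v for v, _ in bsm_terms)
    width = 2.0 * err_sm_1sig + sum(abs(v) * r for v, r in bsm_terms)
    return central - width, central + width

# e.g. an LO 2HDM shift known to 10% and an NLO SUSY shift known to 30%:
lo, hi = two_sigma_window(3.36e-4, 0.23e-4,
                          [(+0.40e-4, 0.10), (-0.15e-4, 0.30)])
print(lo, hi)
```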
BR[B_s^0 → µ^+µ^-] is the observable where the evolution since [4] has been the most critical. The experimental status has seen the upper bound BR[B_s^0 → µ^+µ^-]_exp < 5.8 · 10^-8 (95% CL) replaced by an actual measurement at LHCb and CMS [29,31], BR[B_s^0 → µ^+µ^-]_exp = (2.8 +0.7 -0.6) · 10^-9. The corresponding value agrees well with the recent SM calculation [26], BR[B_s^0 → µ^+µ^-]_SM = (3.65 ± 0.23) · 10^-9. It is thus no longer sufficient to consider tan β-enhanced effects only, and we therefore design a full test at NLO in the new version of bsg.f. The general formalism remains unchanged and the master formula can be recovered e.g. using Eqs.(5.15)-(5.16) of [20]. As before, we shall neglect effects from the 'mirror operators' - which are suppressed as m_s/m_b - and focus on the leading coefficients c_A (pseudovector), c_S (scalar) and c_P (pseudoscalar) of the (b̄s)(µ̄µ) system. The analysis is simplified by the fact that - provided the corresponding operators have been suitably normalized - these semi-leptonic coefficients have a trivial running.
• The SM contribution to BR[B_s^0 → µ^+µ^-] is known up to three-loop QCD [27] and leading QED order [28]. It projects on the pseudovector operator exclusively. We shall use the numerical parametrization of [26], Eq.(4), to account for it.
• Additional 2HDM contributions appear in the form of Z-penguins, boxes and neutral-Higgs penguins; [20] provides the corresponding input. Instead of using Eqs.(3.50)-(3.58) of that same reference for the neutral-Higgs penguins, we resort to [18], Eqs.(6.35) and (6.36). As in [4] - see also [40] -, we replace the squared Higgs mass in the denominator by a Breit-Wigner function, so as to account for potentially light Higgs states.
A similar analysis can be conducted for B_d^0 → µ^+µ^-. The experimental measurement [31] combines the LHCb and CMS limits. The formalism is the same as for the B_s^0 decay up to the trivial replacement s → d. In practice, we use the quantities m_{B_d} = 5.27958 GeV [41], τ_{B_d} = (1.520 ± 0.004) ps [31], f_{B_d} = (188.5 ± 5.25) MeV - again an ad-hoc combination of the various results presented in [42] - and |V_tb V_td^*| = (8.6 ± 2.8) · 10^-3 [43]. Due to the larger uncertainties, one expects milder limits than in the B_s^0 case, however.

The b → sl^+l^- transition

The process B̄ → X_s l^+l^- was not considered in [4] but had been added to bsg.f later, including only the scalar contributions from tan β-enhanced Higgs penguins. Here we aim at a more complete analysis. The study in [30] provides a recent overview of the observables which can be extracted from B̄ → X_s l^+l^-. We will confine ourselves to the branching fractions in the low ([1, 6] GeV^2) and high (≥ 14.4 GeV^2) m^2_{l+l-} ranges. Eqs.(B.33) and (B.36)-(B.38) of the considered paper provide the dependence of these rates on new-physics contributions to the (chromo-)magnetic operators as well as the semi-leptonic operators of the vector type. The sole SM evaluation can be extracted from Eqs.(5.13)-(5.15) of [30], while the prefactor in Eq.(4.6) of [30] can be evaluated separately to allow for a different choice of the central values of CKM / non-perturbative contributions: we choose to take the latter from [25] since the normalization coincides with that of BR[B → X_s γ]. The computation of the Wilson coefficients for the (chromo-)magnetic operators - C_7^eff, C_8^eff - has already been described in connection with B̄ → X_s γ: we simply run these coefficients down to the matching scale of [30], µ_0 = 120 GeV. Moreover, C_10 coincides with c_A - discussed in the context of B_s^0 → µ^+µ^- - up to a normalization factor. Only C_9 is thus missing: it can be obtained from [21] - see Eq.(3.6) and Appendix A of this reference. Although the lepton flavour has very little impact on C_{9,10} - it intervenes only via the lepton Yukawa couplings in subleading terms -, we still distinguish between C^e_{9,10} and C^µ_{9,10}. While this is ignored by [30], B̄ → X_s l^+l^- could also be mediated by scalar operators, as shown in Eq.(2.5) of [44] - note that the coefficients C_{Q1,2} there coincide, up to a normalization factor, with the c_{S,P} introduced before. Therefore, we add these contributions accordingly, estimating the integrals over m^2_{l+l-} numerically. To account for possibly light Higgs states, the Higgs-penguin contributions from SUSY loops are isolated in c_{S,P} and receive Breit-Wigner denominators of the form m^2_{l+l-} - m_H^2 + i m_H Γ_H, which are then integrated. Note that the scalar coefficients c_{S,P} depend linearly on the lepton mass, so that they matter only in the case of the muonic final state.
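The Breit-Wigner treatment of an almost on-shell light Higgs, used both here and for B_s^0 → µ^+µ^-, amounts to the propagator replacement sketched below (masses and the width are hypothetical, for illustration only):

```python
def resonant_propagator(mH, GammaH, s):
    """Breit-Wigner replacement for a light-Higgs penguin: instead of the
    naive 1/mH^2 one uses 1/(s - mH^2 + i mH GammaH), so that a nearly
    resonant state (s ~ mH^2) enhances, but does not blow up, the amplitude."""
    return 1.0 / complex(s - mH**2, mH * GammaH)

mBs = 5.3669   # GeV; for Bs -> mu mu the relevant invariant mass is s = mBs^2
for mH in (4.0, 5.37, 8.0):   # GeV, hypothetical light pseudoscalar masses
    print(mH, abs(resonant_propagator(mH, GammaH=0.01, s=mBs**2)))
```

The enhancement for m_H close to m_Bs is what drives the 'resonant pseudoscalar' exclusions discussed below.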
Finally, we come to the error estimate: the SM uncertainties (including e.g. CKM effects) are extracted from Eqs.(5.13)-(5.15) of [30]. Linearizing Eqs.(B.33) and (B.36)-(B.38) (of that same reference) in terms of C^BSM_{7,...,10}, we associate a 10% uncertainty to these new-physics contributions and add it linearly. As the contributions from scalar operators are added 'by hand', we use a larger uncertainty of 30%. The experimental values relevant for the B̄ → X_s l^+l^- transition are extracted from Eqs.(1.1) and (1.2) in [30]. The normalized FB asymmetry Ā_FB[B → X_s l^+l^-] could also be implemented using the results in [30], with the quantities H_A, H_T and H_L made explicit in Appendix B of [30]. Note that the corresponding contributions from scalar operators are suppressed as (m_l/m_b)^2 [44] and may thus be neglected. However, the only experimental source (Belle) [45] chose a different binning, so that the results cannot be compared. Beyond the inclusive decay rates, much effort has been devoted to the study of the B → K^(*) l^+l^- exclusive modes in the last few years. The full angular analysis of these modes provides two dozen independent observables [46]. Tensions with the SM estimates have been reported in some of these channels, however, leading to a substantial literature (see e.g. [1,47,48]). In this context, we choose to disregard these exclusive modes for the time being, waiting for a clearer understanding of the reported anomalies.

The b → sνν transition

The b → sνν transition is known to provide theoretically clean channels. While ignored in our original work, we decide to include the following three observables in the new version of the code: BR[B → X_s νν], BR[B → Kνν] and BR[B → K^*νν]. We follow the analysis of [49] (section 5.9), updated in [50]. The Wilson coefficients are provided at NLO in section 3.2 of [20]. Under our assumption of minimal flavour violation, with no flavour-changing gluinos or neutralinos, and neglecting the masses of the light quarks, only the coefficient C_L (or X_L in the notations of [49]) receives contributions in the model. The relation between the branching ratios in the NMSSM and those in the SM thus becomes particularly simple: see Eqs.(229)-(232) of [49]. We employ the updated SM evaluations in Eqs.(10), (11) and (23) of [50]. We also note that the ratio of the B^+ / B^0 lifetimes controls that of the corresponding branching fractions. The experimental upper bound on the inclusive branching ratio BR[B → X_s νν] originates from ALEPH [51]; those on the exclusive modes BR[B → Kνν] and BR[B → K^*νν] are controlled by BABAR [52] and BELLE [53] respectively, at 90% CL (see also the compilation in [31]). Generalizing to the b → dνν transition is trivial, though not competitive at the moment.
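Since only C_L is corrected in this minimal flavour-violating setup, each of these rates is a simple rescaling of its SM value, as sketched below; the SM central values shown are order-of-magnitude placeholders, the code taking the precise inputs from [50].

```python
def br_nmssm(br_sm, r_CL):
    """b -> s nu nu rate in the NMSSM when only the left-handed coefficient
    is modified: BR = BR_SM * |C_L / C_L^SM|^2, with r_CL the ratio."""
    return br_sm * abs(r_CL) ** 2

# Order-of-magnitude SM placeholders for the three channels:
BR_SM = {"B -> Xs nunu": 2.9e-5, "B -> K nunu": 4.0e-6, "B -> K* nunu": 9.2e-6}
r_CL = 1.05            # hypothetical 5% enhancement of C_L
print({ch: br_nmssm(br, r_CL) for ch, br in BR_SM.items()})
```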
Flavour transitions via a charged current

The central observable in the b → u transition is BR[B^+ → τ^+ν_τ]. Here, we perform little modification of the original implementation in [4]. In other words, we follow [54], where the effects of the W and charged-Higgs exchanges at tree-level, corrected by tan β-enhanced supersymmetric loops, appear in Eqs.(5)-(7) of the quoted paper. The uncertainty is assumed to be dominated by V_ub [41] and the hadronic form factor [42]. The b → c transition has attracted some attention in the last few years. We will consider the ratios R_D ≡ BR(B^+ → D̄τ^+ν_τ) / BR(B^+ → D̄l^+ν_l) and R_D* ≡ BR(B^+ → D̄^*τ^+ν_τ) / BR(B^+ → D̄^*l^+ν_l). These quantities show a tension between the SM predictions from lattice / HQET, R_D^SM = 0.297 ± 0.017, R_D*^SM = 0.252 ± 0.003 [55][56][57] or, more recently, R_D^SM = 0.299 ± 0.011 [58], R_D^SM = 0.300 ± 0.008 [59], and the experimental averages R_D^exp = 0.391 ± 0.050, R_D*^exp = 0.322 ± 0.022 [31] (HFAG website), which combine results from BABAR [57], LHCb [60] and Belle [61]. Note that these tensions in the b → c transition are independent of the CKM uncertainty on V_cb, due to the normalization. The new-physics contributions are mediated by a charged Higgs, and we can easily translate, for the charged-Higgs / quark couplings, the notations of [62] to ours (see section 2); we thus obtain the corresponding Wilson coefficients, whose dominant scalar piece scales like m_b m_τ tan^2 β / m^2_{H±} at tree level. We assume a 30% uncertainty on these new-physics coefficients, which we add linearly to the SM uncertainty quoted above. Since these observables are only marginally compatible with the SM prediction, we do not devise an actual test for them, but simply propose an evaluation.

B^0_{d,s} oscillation parameters ∆M_{d,s}

The old version of the code used the formalism of [18] to encode the SM - see Eq.(6.7) of that work - as well as the tan β-enhanced double-penguin contributions of the same reference. The new implementation is more complete:
• We follow section 3.1 of [64] to connect the matching scale to the low-energy matrix elements (5-flavour running).
• The low-energy physics is described by the so-called 'Bag' parameters - matrix elements of the operators. We rely essentially on the lattice calculations of [65], except for the operator Q^VLL, which receives the SM contributions and has thus attracted more recent attention. In this latter case, we use the current FLAG average [42] for B̂_{B_{d,s}} - which coincides with the Bag parameter up to a rescaling.
These ingredients allow us to derive a prediction for ∆M_{B_{d,s}} using the master formula of [18], Eqs.(6.6)-(6.8). The hadronic form factors f_{B_{s,d}} are taken from [42], where we combine the various results. For the CKM elements, we continue to rely on the evaluation from tree-level processes proposed in [43]. Note that our central value for ∆M_s in the SM limit is somewhat higher than the latest estimates [66,67]. This is essentially due to the choice of the lattice input: [66,67] have their own averaging, leading to a smaller form factor, while we follow [42]. We come to the error estimate. The uncertainty associated with the SM contributions to the operator Q^VLL is often neglected in the literature. Eq.(11) of [66] shows, however, that there could be an error of at least a few permil. We therefore associate a 1% uncertainty to this contribution, which we add linearly to a 30% uncertainty on each type - charged-Higgs box / SUSY box / double penguin - of new-physics contribution. Then, the uncertainties on the Bag parameters are taken from [42,65] and combined in quadrature at the 2σ level. Finally, we factor out the uncertainties on the CKM elements [43] and the lattice form factor [42], adding them in quadrature at the 2σ level. Note that the CKM uncertainty dominates the total error on ∆M_d (at the level of 60%) and is actually of the order of magnitude of the central value, so that it is important not to linearize the associated error.
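For orientation, the sketch below evaluates the SM part of the ∆M_s master formula with the standard Inami-Lim function; the lattice and CKM inputs are illustrative stand-ins for the combinations of [42,43] used in the code.

```python
import math

def S0(x):
    """Inami-Lim function for the top box contribution to B-Bbar mixing."""
    return (4*x - 11*x**2 + x**3) / (4 * (1 - x)**2) \
        - (3 * x**3 * math.log(x)) / (2 * (1 - x)**3)

def delta_m_s(fBs=0.2284, B_Bs=1.32, VtsVtb=0.0405,
              mt=165.0, mW=80.379, mBs=5.3669,
              GF=1.1663787e-5, etaB=0.5510):
    """SM short-distance prediction for Delta M_s (in ps^-1) from the
    standard master formula,
    Delta M_s = GF^2/(6 pi^2) etaB mBs mW^2 S0(xt) fBs^2 B_Bs |VtsVtb|^2,
    with illustrative values for fBs, the 'hatted' bag parameter and CKM."""
    xt = (mt / mW) ** 2
    dm = (GF**2 / (6 * math.pi**2)) * etaB * mBs * mW**2 \
        * S0(xt) * fBs**2 * B_Bs * VtsVtb**2         # in GeV
    return dm / 6.582119e-25 * 1e-12                 # GeV -> ps^-1

print(delta_m_s())   # ~ 17-18 ps^-1 with these inputs
```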
The s → dνν transition

The physics of Kaons also provides limits on new physics, one example being the s → dνν transition. We again follow [49] (section 5.8), together with the updated SM results of [68]. The Wilson coefficients mediating the transition [69] are very similar to those intervening in the b → sνν transition, except that the interplay of CKM matrix elements is formally different. The normalization to V_td V_ts^*, instead of V_tb V_ts^*, gives more weight to the first two generations: for completeness, we thus incorporate the effects proportional to the charm Yukawa coupling. Note that we continue to neglect the quark masses of the first generation and that tree-level neutralino and gluino couplings do not mediate flavour transitions (by assumption), so that only the coefficient X_L (in the notations of [49] and equivalent to the C_L of section 3.4) is relevant. The decay of a charged kaon to π^+νν, as well as that of the neutral K_L to π^0νν, can then be encoded in terms of this Wilson coefficient: see Eqs.(213)-(214) of [49], where, however, we substitute the more recent SM input of Eqs.(2.2), (2.9), (2.11) of [68].

K - K̄ mixing

As for the B mesons, one can consider the mixing of the K and K̄ mesons. The associated quantities are very precisely measured experimentally [41]: ∆M_K = (3.484 ± 0.006) · 10^-15 GeV and |ε_K| = (2.228 ± 0.011) · 10^-3, where ∆M_K stands for the mass difference between K_L and K_S and ε_K measures indirect CP-violation in the K - K̄ system. However, the theoretical estimates of these quantities suffer from a substantial uncertainty associated with long-distance effects. Several estimates based on representations of large-N QCD had been proposed in the 1980s - see e.g. [72]. Lately, lattice collaborations have been emphasizing the possibility of performing an evaluation in a realistic kinematical configuration in the near future: see e.g. [73]. We will follow [74] (see also the discussion and literature therein) in estimating the long-distance contribution to ∆M_K at (20 ± 10)% of the experimental value, while [75] provides some lattice input for ε_K: we take over the quantity ξ_0 from Eq.(74) and the error estimate on ξ_LD from Eq.(75) (of that reference) - these values were originally computed in [76] and [77], and Eq.(67) of [75] makes their impact on ε_K explicit.
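The treatment of the long-distance piece of ∆M_K adopted here is simple enough to spell out: the short-distance prediction is supplemented by (20 ± 10)% of the experimental value, as sketched below.

```python
def delta_m_k(dm_short_distance, dm_exp=3.484e-15,
              ld_frac=0.20, ld_err=0.10):
    """Delta M_K including the long-distance estimate adopted in the text:
    central value = SD prediction + 20% of the measured Delta M_K (in GeV),
    with a +-10% (of the measured value) long-distance uncertainty."""
    central = dm_short_distance + ld_frac * dm_exp
    error = ld_err * dm_exp
    return central, error

# Hypothetical short-distance value at ~80% of experiment, for illustration:
print(delta_m_k(0.8 * 3.484e-15))
```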
We now turn to the short-distance contributions in the K - K̄ system. The discussion is very similar to the case of the B - B̄ mixing in section 3.6. We follow [78] for the SM part: this paper performed a NNLO evaluation of the charm contribution - see Eq.(15) in that work -, completing earlier results for the mixed charm-top [79] and top [80] contributions. Note that [74] and [75] also propose recent evaluations of the quantity η_cc, with a slightly lower central value, but we choose to stick to the conservative estimate of [78]. The master formulae for ∆M_K and ε_K are provided in Eqs.(18) and (16) of this reference, respectively - see also Eqs.(XVIII.6-9) of [11], as well as Eqs.(XII.3-5) for the expression of the functions S_0. The kaon mass of 0.4976 GeV and the form factor f_K = 0.1563 ± 0.0009 GeV are taken from [41] and [42] respectively. The inclusion of BSM contributions follows the general NLO formalism of [64]: see Eqs.(7.24-7.32). Eqs.(3.20-3.38) (of this reference) make explicit the running between the matching and the low-energy scales. However, we will be using more recent 'Bag parameters' for a low-energy scale of 3 GeV: Table XIII of [81] compiles several recent lattice calculations, which we put to use. As in the B - B̄ case, the Wilson coefficients account for Higgs / quark and chargino / squark box diagrams as well as Higgs double-penguin contributions, and we follow appendix A of [18]. Yet, we also include effects associated with the charm Yukawa, as the interplay of CKM elements gives more weight to such terms than for the B - B̄ mixing. Finally, we use appendix C of [64] to run each new-physics contribution from the relevant BSM scale (charged-Higgs or squark mass) down to the matching scale of 166 GeV. The SM uncertainty, in which the uncertainty on the η_cc parameter and the long-distance effects dominate, is added linearly to the uncertainty driven by the bag parameters and a 30% error on higher-order contributions to the BSM Wilson coefficients. Note that one could carry out a similar analysis for the D - D̄ mixing, which we do not consider here, however.

Sampling the impact of the flavour constraints

Based on the discussion of the previous sections, we design two Fortran subroutines, bsg.f and Kphys.f, for the evaluation in the NMSSM of the considered observables in the B- and the Kaon-sectors (respectively), as well as a confrontation with experimental results. These subroutines are then attached to the public tool NMSSMTools [5].

Comparing the new and the old codes

In order to test the differences between the new and the former implementations of B-physics observables in bsg.f, we perform a scan over the plane defined by m_{H±} - the charged-Higgs mass - and tan β and display the exclusion contours associated with flavour constraints in Figs. 1 and 2. The chosen region in the NMSSM parameter space corresponds to the MSSM-limit, with degenerate sfermions and hierarchical neutralinos. Note that we disregard the phenomenological limits from other sectors (e.g. Higgs physics, Dark matter, etc.). We consider a large value of the trilinear stop coupling |A_t| = 2.5 TeV, which is known to enhance effects driven by supersymmetric loops, and study separately the two opposite signs - a negative value of A_t, when µ > 0, typically triggers destructive interference among the SUSY and 2HDM contributions to B̄ → X_s γ.
• The limits from B̄ → X_s γ are more severe in the new version, which is mostly apparent in Fig. 1 (A_t > 0): this is not unexpected, since the larger SM central value - closer to the experimental measurement - correspondingly disfavours new-physics effects which interfere constructively with the SM contribution (2HDM effects or supersymmetric loops for µ, A_t > 0). Consequently, the areas with a light charged Higgs or large tan β receive excessive BSM contributions in view of the experimental measurement and are thus disfavoured. Moreover, note that the full NLO implementation reduces somewhat the error bar associated with higher-order new-physics contributions, which also results in tighter bounds for the more recent code. For A_t < 0, one observes two separate exclusion regions: for low values of tan β and m_{H±}, the 2HDM contribution is large (excessive) while the negative SUSY effect is too small to balance it; on the contrary, with large tan β and heavy H^±, the SUSY contribution dominates and is responsible for the mismatch with the experimental measurement. In between, the destructive interplay between the SM and 2HDM effects on one side and the SUSY loops on the other succeeds in keeping BR[B → X_s γ] within phenomenologically acceptable values.
• Limits from B_s^0 → µ^+µ^- used to be little sensitive to the sign of A_t in the older implementation.
This is no longer true, the reason being that the scalar coefficients c_{S,P} receive new contributions, which had been neglected in the previous version of the code and may interfere constructively or destructively with the Higgs-penguin effects. This channel appears as the most sensitive one, together with B̄ → X_s γ, in the considered scenario. Given the shape of the exclusion regions driven by B̄ → X_s γ, however, B_s^0 → µ^+µ^- seems most relevant for A_t < 0 (Fig. 2). Expectedly, the limits are tighter for large tan β, where SUSY contributions are enhanced.
• Limits from B̄ → X_s l^+l^- differ more significantly between the two implementations - although they remain subleading. In particular, an excluded region appears at low tan β: it is largely driven by the 2HDM contributions to the semi-leptonic vector coefficients C_{9,10} - which indeed involve terms scaling like 1/tan β. On the other hand, the exclusion region at low m_{H±} is largely unchanged: it is associated with the enhancement of the Higgs-penguin contributions for a light Higgs sector.
• Despite the corrections to the tan β-enhanced Higgs/quark vertices, the constraints from B^+ → τ^+ν_τ, ∆M_s and ∆M_d are little affected by the modernization of the code and remain subleading.
We observe that B̄ → X_s γ and B_s^0 → µ^+µ^- intervene as the determining limits from the flavour sector in the considered scenario: they exclude all the region beyond tan β ≳ 20. The low-m_{H±} region is in tension with most of the observables in the B-sector (unsurprisingly), though B̄ → X_s γ and B_s^0 → µ^+µ^- again appear as the limiting factors at low-to-moderate tan β. Interestingly, B̄ → X_s l^+l^- seems to offer a competitive test for tan β ≲ 2. We perform a second test in a region involving a light CP-odd Higgs state with mass below 15 GeV - still presuming nothing of the limits from other sectors: note that this is a phenomenologically viable scenario in the NMSSM, although the limits on unconventional decays of the SM-like Higgs state at ~125 GeV place severe constraints on the properties of the light pseudoscalar. The results are displayed in Fig. 3 - in terms of the mass of the pseudoscalar m_{A1} and tan β - and confirm the trends that we signaled before:
• Limits from B̄ → X_s γ intervene here at low tan β - where the supersymmetric contributions cannot balance the effect triggered by the charged Higgs (note that A_t < 0). A few points are also excluded for low m_{A1} and large tan β: these result from the two-loop effect mediated by a neutral Higgs. They prove subleading in the considered region.
• Limits from B_s^0 → µ^+µ^- appear somewhat tighter in the new implementation. In particular, a narrow corridor where the new-physics effects reverse the SM contribution is visible in the plot on the bottom of Fig. 3 (which corresponds to the older implementation of the limits) - from (m_{A1} ~ 6, tan β ~ 2) to (m_{A1} ~ 15, tan β ~ 5); this region is no longer accessible with the more recent code (it is, in fact, shifted to lower values of tan β). This channel is the main flavour limit in the considered region, due to the large contribution mediated by an almost on-shell Higgs penguin.
• Limits from B̄ → X_s l^+l^- intervene in two fashions. One is the exclusion driven by an almost-resonant pseudoscalar, and the associated bounds are essentially unchanged with respect to the older implementation. Additionally, a new excluded area appears at low tan β.
• Limits from ∆M_{d,s} are qualitatively unchanged between the two versions, though the bounds associated with ∆M_d seem somewhat more conservative in the new implementation. These constraints remain subleading, however, in view of the more efficient BR(B_s^0 → µ^+µ^-), and confine to the resonant regime - note e.g. the allowed 'corridor' where new-physics contributions reverse the SM effect - or the very low range tan β ≲ 1.
Figure 3: Above, the results with the new code; below, the results of the old version. The colour code remains the same as before. The symbols are also unchanged, except for B̄ → X_s l^+l^-, with horizontal lines, and B̄ → X_s γ, with circles (for reasons of legibility).
B_s^0 → µ^+µ^- thus appears as the constraint which is most sensitive to the enhancement effect related to a near-resonant pseudoscalar. The exclusion effects are most severe for larger tan β, as the Higgs-penguin is correspondingly enhanced. For tan β ≲ 2, B̄ → X_s l^+l^- proves a sensitive probe in its new implementation. Note that, in the two scenarios that we discussed here, the precise limits on the {m_{H±}, tan β} or {tan β, m_{A1}} planes of course depend on the details of the parameters. In particular, the large value of |A_t| triggers enhanced SUSY effects, resulting in severe bounds on the considered planes. We thus warn the reader against over-interpreting the impression that only corners of the parameter space of the NMSSM are in a position to satisfy B-constraints at 95% CL, as Figs. 1, 2 and 3 might lead one to believe. To counteract this effect, we present in Fig. 4 the limits from flavour processes obtained with the new implementation, for A_t = 500 GeV and a somewhat heavier chargino / neutralino sector. The plot on the top again considers the plane {m_{H±}, tan β} in the MSSM limit. SUSY contributions are suppressed by the choice of low A_t. Correspondingly, limits from B̄ → X_s γ only intervene in the region with low m_{H±} < 300 GeV. The constraints driven by B_s^0 → µ^+µ^- eventually exclude the large tan β ≳ 45 range but are obviously weaker than before. On the other hand, the exclusion contours associated with B^+ → τ^+ν_τ and B̄ → X_s l^+l^- remain largely unaffected. The plot on the bottom part of Fig. 4 addresses the scenario with a light pseudoscalar: contrary to the case of Fig. 3, CP-odd masses above m_{A1} ~ 6 GeV are left unconstrained by the flavour test, with exclusions intervening only at very low tan β or for m_{A1} in the immediate vicinity of a resonant energy (for B_s^0 → µ^+µ^-, ∆M_s or B̄ → X_s l^+l^-).

Impact of the new flavour tests

Beyond the observables which had been considered in [4], we have extended our analysis to several new channels. We now wish to discuss their impact on the NMSSM parameter space. In Fig. 5, we consider the scenario of Fig. 1 once more and present the exclusion limits driven by the newly implemented channels. Note that the constraints considered in the previous section form the black exclusion zone in the background.
Figure 5: Exclusion contours driven by B̄ → X_d γ (red, horizontal lines), B_d^0 → µ^+µ^- (light green, vertical lines), B → X_s/Kνν (dark green, circles), the K - K̄ mixing (yellow, diamonds) and K → πνν (orange, triangles) in the plane {m_{H±}, tan β} for the scenario of Fig. 1. The limits obtained with the observables considered in Fig. 1 are shown in the background in black (crosses). The case of A_t > 0 is depicted on the top, while A_t < 0 is on the bottom.
The limits from the various channels shown in this plane prove essentially subleading in view of the previous constraints of Fig. 1; the bounds from B̄ → X_d γ, in particular, are superseded by B̄ → X_s γ.
• The processes of the b → sνν and s → dνν transitions are found to be well under the current experimental upper bounds.
• The K - K̄ mixing excludes a few points (driven by ε_K, where the SM is slightly off with respect to the experimental results) but is not competitive in view of the, admittedly conservative, uncertainties.
Note that the limits induced by the b → cτν_τ channels have been omitted in Fig. 5. Given the current data, this transition would exclude the whole {m_{H±}, tan β} plane, with the exception of the large tan β / low m_{H±} corner - which is excluded by most of the other flavour constraints: the significant discrepancy of the SM estimate with the experimental measurement, especially for B → D^*τν_τ, explains this broad exclusion range. SUSY / 2HDM effects cannot reduce the gap much, except in already excluded regions of the parameter space.
Figure 6: Exclusion contours driven by B̄ → X_d γ, B_d^0 → µ^+µ^-, B → X_s/Kνν, the K - K̄ mixing and K → πνν in the plane {tan β, m_{A1}} for the scenario of Fig. 3. The limits obtained with the observables considered in Fig. 3 are shown in the background in black. We employ the same colour / symbol code as for Fig. 5.
Then, we return to the light pseudoscalar scenario of Fig. 3 and display the constraints associated with the new channels in Fig. 6. Again, these limits are found to be weaker than those shown in the previous section. Limits from B_d^0 → µ^+µ^- prove the most constraining of the new channels in this regime: this again results from the enhancement of the Higgs-penguin mediated by a resonant A_1. Subleading constraints from the K - K̄ mixing also intervene at low tan β ≲ 1 and for a very light CP-odd Higgs with m_{A1} ≲ 2 GeV. Again, the discrepancy between SM predictions and experimental measurements for the b → cτν_τ transition cannot be interpreted in this scenario, so that applying a 95% CL test for the ratios R_{D(*)} would lead to the exclusion of the whole portion of parameter space displayed in Fig. 6. Finally, we complete this discussion by considering the parameter sets of Fig. 4, where the flavour limits discussed in the previous section were found to be weaker. The impact of the new channels can be read in Fig. 7. The corresponding exclusion regions in the considered regime with A_t = 500 GeV again prove narrower than those considered in Fig. 4. (Note again that we have omitted the b → cτν_τ channels, however.) Therefore, we find that the new channels tested in bsg.f and Kphys.f are typically less constraining than the older ones, which we discussed before. Limits from B̄ → X_d γ and B_d^0 → µ^+µ^- are found to be significant, however, and an evolution of the experimental limits or an improvement in understanding the SM uncertainties may provide them with more relevance in the future. The b → cτν_τ transition stands apart, as the tension between SM and experiment resists an NMSSM interpretation, at least in the scenarios that we have been considering here.

Conclusions

We have considered a set of flavour observables in the NMSSM, updating and extending our former analysis in [4]. These channels have been implemented in a pair of Fortran subroutines, which allow for both the evaluation of the observables in the NMSSM and confrontation with the current experimental results.
We have taken into account the recent upgrades of the SM status of e.g. BR[B → X_s γ] or BR[B_s^0 → µ^+µ^-] and included BSM effects at NLO. The tools thus designed will be / have been partially made public within the package NMSSMTools [5]. We observe that the bounds on the NMSSM parameter space driven by BR[B → X_s γ] or BR[B_s^0 → µ^+µ^-] have become more efficient, which should be considered in the light of the recent evolution of the SM status and/or the experimental measurement for both these channels. In particular, the large tan β region is rapidly subject to constraints originating from the flavour sector. Similarly, the light pseudoscalar scenario is tightly corseted due to the efficiency of Higgs-penguins in the presence of such a light mediator. Among the new channels that we have included, we note the specific status of the b → cτν_τ transition, where the discrepancy between SM and experiment seems difficult to address in a SUSY context. Other channels of the flavour-changing sector may prove interesting to include in the future. Note e.g. the current evolution in the B → K^(*) l^+l^- observables.
Characteristics and formation mechanism of Carbonate buried hill fractured-dissolved reservoirs in Bohai Sea, Bohai Bay basin, China

Carbonate buried hills have recently yielded exploration breakthroughs in the offshore Bohai Bay Basin, China, but the plan-view distribution of the buried hill reservoirs remains unclear due to their high heterogeneity. Taking the CFD2 oilfield as an example, based on core, thin section, seismic, and well logging data, the characteristics of the Carbonate buried hill reservoirs in the study area were clarified, the formation mechanism of the reservoirs was discussed, and a development model of the reservoirs was established. The results show that the reservoirs are mainly fractured-dissolved reservoirs, and their formation is mainly related to structural fractures and fluid dissolution along the fractures. The NWW-trending structural fractures were formed under the control of the Indosinian compression, and the NEE-trending structural fractures were formed under the control of the Yanshanian strike-slip transpression. Dolomite is more brittle than limestone and is the main lithology for forming effective fractures. Structural fractures provide favorable channels for atmospheric water dissolution. The C and O isotope values reveal that at least two stages of dissolution have occurred in the study area, namely supergene karstification and burial karstification. A model of the fractured-dissolved reservoir under the control of "structure-lithology-fluid" was established. This model highlights that the structural fractures formed by tectonic activities are crucial to reservoir development, and that lithology is the internal factor controlling reservoir distribution. Dolomite exhibits a compressive strength only half that of limestone, and it is the dominant lithology for reservoir development. The dissolution of atmospheric water along the fractures during the two stages greatly improved the physical properties of the reservoirs, and it is the guarantee for the development of effective reservoirs.

Introduction

Carbonate is an important lithology for hydrocarbon accumulation. Globally, nearly 50% of oil reserves and 25% of gas reserves are hosted in carbonate reservoirs (Li et al., 2016; Ye et al., 2022a). Previous studies have confirmed that karstification is an important factor controlling the development of carbonate reservoirs. Currently, a variety of development models of karst reservoirs have been established, such as the karst model, the quasi-contemporaneous karst model, and the fault-karst reservoir development model (Gabrovsek et al., 2004; Zhao W. Z. et al., 2012; Xie et al., 2013; Dong et al., 2021; Guo et al., 2021). These models have guided the exploration discoveries of many large-medium oil and gas fields. Carbonate buried hills are an important exploration domain in the rift basins of eastern China and have long drawn the attention of petroleum geologists (Xiao et al., 2018; Hua et al., 2020). Many scholars have studied Carbonate buried hill reservoirs and recognized that these reservoirs feature diverse storage spaces and many controlling factors, including lithology, lithofacies, tectonic activity, hydrothermal fluid, and weathering, which also induce complex reservoir-forming mechanisms and distribution models of Carbonate buried hills (Yu et al., 2015; Li et al., 2016; Wang et al., 2016; Ye et al., 2020; Liu C. F. et al., 2022).
Therefore, it is of great significance to clarify the characteristics of Carbonate buried hill reservoirs, define the formation mechanism of the reservoirs, and establish reservoir development models that conform to the geological conditions for the exploration and development of oil fields. The Bohai Bay Basin on the North China Craton in eastern China is a typical Cenozoic rift basin. The North China Craton was covered by an extensive epicontinental sea during the Paleozoic, with >1,000 m of marine carbonate rocks deposited (Ye et al., 2022b). These carbonate strata formed abundant buried hill traps during the Mesozoic-Cenozoic tectonic activities, which are important oil and gas exploration domains in the Bohai Bay Basin. Since the first discovery of a Lower Paleozoic Carbonate buried hill in Yihezhuang of the Jiyang Depression (Wang and Li, 2017), the Renqiu buried hill and the Qianmiqiao buried hill have been confirmed successively, proving the great exploration potential of the Lower Paleozoic in the Bohai Bay Basin (Jin et al., 2001; Zhao X. Z. et al., 2012; Dong et al., 2015; Ma et al., 2020). As a key factor controlling hydrocarbon accumulation, the reservoir has always been the focus of research on Carbonate buried hills (Ni et al., 2010; Tang et al., 2013; Zhao et al., 2013; Zhang et al., 2014). In the offshore Bohai Sea, the most tectonically active area of the Bohai Bay Basin, the Lower Paleozoic reservoirs host natural fractures of many types and scales as a result of multiple tectonic movements and diagenetic processes in the geological past (Li et al., 2023). The existence of fractures is crucial to reservoir development and hydrocarbon enrichment in the Lower Paleozoic (Ye et al., 2022a). In addition, the later dissolution by multi-source fluids along the fractures greatly improved the physical properties of the reservoirs, forming extensive fractured-dissolved reservoirs (Hua et al., 2020). However, the formation mechanism of the structural fractures and the mechanism of the later superimposed dissolution by diagenetic fluids in the complex superimposed basin of the Bohai Sea are still unknown, which seriously restricts the prediction of fractured-dissolved reservoirs. In this paper, the Lower Paleozoic buried hill reservoirs in the northwest structural belt of the Shaleitian rise in the Bohai Sea were characterized using drilling, well logging, core, and seismic data, combined with thin section observation and C and O isotope analysis. Specifically, the formation mechanism of the fractures and the source of the diagenetic fluids were discussed, and the reservoir development model was finally established. The established development model of the fractured-dissolved reservoir is an improvement on the simple karst model, and it can serve as a reference for understanding the development of carbonate reservoirs in regions with strong tectonic activity.

Geological setting

The Bohai Bay Basin is a typical Mesozoic-Cenozoic continental rift basin, with an exploration area of about 20 × 10^4 km^2. The study area was strongly reworked by the Indosinian and Yanshanian movements in the pre-Cenozoic period, forming abundant NWW- and NEE-trending basement faults (Xu et al., 2019; Ye et al., 2022b). These basement faults were reactivated in the Cenozoic and further controlled the Cenozoic basin framework. The Bohai Sea in the central-east part of the basin includes 14 sags and 13 rises, which alternate across the region.
Many faults developed in the Cenozoic basin, and they controlled the formation and evolution of the basin and the hydrocarbon distribution (Zhou et al., 2010). In the Cenozoic, the Bohai Sea mainly experienced the Paleocene-Eocene rifting, the Oligocene fault depression, and the Miocene-Pliocene depression (Tang et al., 2008; Zhang et al., 2017). The northwest structural belt of the Shaleitian rise is located in the western part of the Bohai Sea, on the northwest slope of the Shaleitian rise, adjacent to the hydrocarbon-rich Nanpu and Qikou sags (Figure 1A). Covering about 2 × 10^3 km^2, the structures mainly trend nearly NWW and are modified by a near-NNE strike-slip fault system. The CFD2 Carbonate buried hill oilfield is an anticline structure formed by reverse faults on the slope belt and contains oil and gas mainly in the Lower Paleozoic strata (Figures 1B, C). From south to north in the northwest structural belt of the Shaleitian rise, the Archean, Lower Paleozoic, and Mesozoic strata are exposed successively. The Archaean is dominated by migmatitic granite and granite gneiss. The Cambrian of the Lower Paleozoic is composed of interbedded marine carbonate and mudstone, mainly thick mudstone intercalated with thin dolomite or limestone. The Ordovician of the Lower Paleozoic is a marine carbonate deposit, mainly massive limestone and dolomite intercalated with thin marl layers. The Mesozoic has the most complex lithology, a set of continental clastic rocks containing volcanic rocks, developed only in the low-lying area of the Nanpu sag.

Data and method

The samples in this study were taken from the exploration wells (e.g., CFD2-A, CFD2-B, CFD2-C, CFD2-D, CFD2-E, and CFD2-F) in the Lower Paleozoic Carbonate buried hill on the Shaleitian rise. The data include drilling core data, sidewall coring data, 3D seismic data, conventional well logging curves, and imaging logging curves. In order to characterize the fractures, we observed 73 m of drilling cores and described 214 sidewall cores from 6 wells. We prepared cast thin sections for each sidewall core and examined them to identify the main types of storage space in the reservoirs. We also carried out fluorescence observation on key thin sections to determine the hydrocarbon-bearing properties and the effectiveness of the fractures. We analyzed the electric imaging logging data of each exploration well to determine the strike and dip angle of the effective fractures. In order to determine the geological factors controlling the formation of the fractures, we used geological profiles interpreted from seismic data to restore the evolution of the buried hills. Based on the analysis of the regional tectonic evolution, we analyzed the main fracture initiation events and their role in forming fractures. The carbon and oxygen isotopes of the samples were measured using a mass spectrometer (2019.FLS0167/SN09609D) in the Experimental Center of China National Offshore Oil Corporation (Ye et al., 2022a). In combination with the C and O isotope measurements of the fracture fillings, we discussed the source of the diagenetic fluids. In addition, we measured the physical properties of 105 typical samples, so that the development of the reservoirs could be determined quantitatively. In order to reveal the control of tectonic activity on fracture distribution, the maximum stress of the strata was obtained by finite element simulation based on the structural map of the top of the buried hill.
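As a sketch of the kind of quantitative treatment applied to the 105 samples, the snippet below computes the porosity-permeability (Pearson) correlation on hypothetical measurements; a weak correlation, as found in the next section, points to a fractured rather than purely porous medium.

```python
import numpy as np

def poro_perm_correlation(porosity_pct, permeability_md):
    """Pearson correlation between porosity (%) and log10 permeability (mD);
    returns (r, r^2). Permeability is log-transformed, as is usual for
    poro-perm cross plots."""
    r = np.corrcoef(porosity_pct, np.log10(permeability_md))[0, 1]
    return r, r**2

# Hypothetical stand-ins for the measured sidewall-core values:
phi = np.array([1.2, 2.5, 0.8, 3.1, 6.7, 1.9, 4.2, 0.5])    # porosity, %
k = np.array([0.05, 0.7, 0.02, 0.3, 5.0, 8.0, 0.09, 0.4])   # permeability, mD
print(poro_perm_correlation(phi, k))
```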
Reservoirs

The analysis of drilling cores, sidewall cores, and fluorescence thin sections reveals that there are several types of storage space in the reservoirs. Numerous structural fractures can be seen in thin section, with mutual cross-cutting relationships, indicating at least two stages of fracture initiation (Figure 2D). Microscopically, many fractures are observed to have been filled with calcite at a late stage (Figure 2D). The fillings are clearly different and can be divided into silty-fine crystal fillings and coarse crystal fillings. There are also many unfilled fractures, along which strong dissolution features can be seen, which is consistent with the observations of the drilling cores (Figures 2E, F). Asphalt can be seen in the fractures in thin section (Figure 2G). In the fluorescence thin sections, abundant oil and gas shows can be seen, which indicates that both the structural fractures and the dissolved pores in the study area are the main effective storage spaces (Figures 2H, I). The physical properties of 105 sidewall cores were measured. The porosity is 0.1%-9.3%, concentrated at 1%-3%, and most samples exhibit a porosity < 5% (Figure 3A). The permeability is mainly < 10 mD, and many samples exhibit a permeability of 0.01-1 mD. The cross plot shows that the porosity and the permeability are weakly correlated (correlation coefficient of 0.2208), indicating that the reservoir is mainly a fractured-porous medium (Figure 3B), which is consistent with the observations of the drilling cores and thin sections.

Fractures

Structural fractures are highly developed in the study area. Numerous structural fractures are observed in cores and thin sections. The occurrence and dip angle of the structural fractures were depicted by electric imaging logging. It is found that there are mainly two groups of fractures in the CFD area by trend, NEE and NWW, with the NEE-trending fractures dominant. The structural fractures show an obvious dark sine curve on the electric imaging log, and medium-sized pores/vugs can be seen along the structural fractures. In addition, some fractures are relatively wide and may have been dissolved by atmospheric water (Figures 4A, B). The fractures differ considerably in dip angle: the NWW-trending fractures have dip angles of 30°-80°, with an average of 48°; the NEE-trending fractures have dip angles of 50°-80°, with an average of 67°. Clearly, the NEE-trending fractures show higher dip angles than the NWW-trending fractures, possibly suggesting that they were formed in different tectonic settings (Figures 4C-F).

Fracture fillings

Fracture fillings are important for revealing diagenetic fluids. There is a huge amount of calcite filling in the structural fractures in the study area. The C and O isotopes of the fillings in the fractures and of the surrounding rocks were analyzed to determine the source of the diagenetic fluids. It is found that the C isotope of the surrounding rocks is generally positive and occasionally negative, mostly greater than −1‰, and the O isotope is between −18‰ and −4‰, representing the isotopic characteristics of seawater during sedimentation (Table 1). The fracture fillings exhibit significantly different C and O isotopes from the surrounding rocks. According to the degree of crystallization, they can be divided into silty-fine crystal dolomite and limestone, and coarse crystal dolomite and limestone, which differ in their C and O isotopes.
For the silty-fine crystal dolomite and limestone, the C isotope ranges from −2‰ to 0‰, and the O isotope from −18‰ to −8‰. For the coarse crystal dolomite and limestone, the C isotope ranges from −4‰ to −2‰, and the O isotope from −18‰ to −8‰. It can be seen that the coarse crystal fillings are equivalent to the fine crystal fillings in O isotope, but generally display a lower C isotope, possibly indicative of different diagenetic environments (Figures 5, 6; Table 1).

Formation mechanism and distribution of fractures

The formation and distribution of structural fractures are often related to strong tectonic activities (Ye et al., 2022b; Guo et al., 2022). In the study area, the fractures were mainly formed during the Indosinian and Yanshanian tectonic activities, with the former inducing the NWW-trending fractures and the latter inducing the NEE-trending fractures (Figure 4). Based on the geological profile, the tectonic evolution of the CFD area was restored. This area received a set of epicontinental marine carbonate deposits during the early Paleozoic and a set of transitional deposits during the late Paleozoic (Figure 7A). The collision between the South China Plate and the North China Plate during the Indosinian induced a series of NWW-trending thrust faults in the study area, and also caused fold deformation of the Paleozoic (Figure 7B). The NWW-trending faults stopped their activity in the Yanshanian (Figure 7C). At this time, a group of NEE-trending strike-slip faults was formed, mainly characterized by sinistral compression, leading to the uplift and fold deformation of the strata near the fault belt (Figure 8). Since the Himalayan period, the NWW-trending faults have been reactivated and have controlled the deposition of the Paleogene, and the buried hill was finally buried and shaped (Figures 7D, E). The multi-phase tectonic activities in the buried hill provided favorable external stress for the development of structural fractures. During the Indosinian compression, the CFD area was close to the thrust fault belt, and a large number of NWW-trending fractures developed near the NWW fault belt. For example, the imaging logging data of Well CFD2-F and Well CFD2-C reveal that the NWW-trending fractures have low dip angles, possibly because they were formed in a thrust nappe setting. During the formation of the strike-slip faults, intense fracturing deformation might occur along the fault belt, thus generating a great number of structural fractures (Liu J. S. et al., 2022; Yun and Deng, 2022). However, the distribution of fractures controlled by strike-slip faults is complex, being related to the specific spatial shape of the faults and the derived local stress field. Strike-slip faults play different roles in the distribution of fractures: the stress is most concentrated in the strongly deformed zone of the strata near the strike-slip fault zone, which is the main zone of fracture development. The structural fractures formed by the strike-slip fault belt are mainly located inside the main fault belt, but where the trend of the strike-slip fault belt changes, the locally derived stress field forms 'sweet spots' of fractures (Yu et al., 2014; Li et al., 2021). Field measurements show that obvious pressure-increasing and pressure-releasing belts form at strike-slip bends.
The pressure-increasing belt is often accompanied by terrain uplift and the creation of numerous structural fractures, while the pressure-releasing belt mostly forms grabens with few fractures (Wei, 2015). Abundant NEE-trending fractures developed near the NEE fault belt during the Yanshanian, especially in the strongly uplifted areas of the buried hill caused by the strike-slip faults, where the fracture density is the largest. For instance, the fractures in Wells CFD2-D, CFD2-E, CFD2-F, and CFD2-C all trend mainly in the NEE direction (Figure 8). It is worth noting that the structural fractures induced by strike-slipping have high dip angles, and they exhibit higher effectiveness than the NWW-trending fractures. The formation of a high-quality buried hill reservoir is not only affected by external factors but is also related to the physical properties of the rock itself (Hou et al., 2015; Wang et al., 2015). The compressive strength of a rock determines its ability to form fractures: the lower the compressive strength, the more easily the rock fractures. Compressive strength tests on different lithologies confirm that the compressive strength is 49 MPa for dolomite, 70.74 MPa for limy dolomite, 75.4 MPa for dolomitic limestone, and 104.9 MPa for limestone; that is, with increasing dolomite content in the carbonate, the compressive strength tends to decrease gradually (Figure 9B). It is worth noting that the compressive strength of dolomite is only half that of limestone; that is, the stress required for dolomite to break and form structural fractures is only half that required for limestone. This means that the higher the dolomite content, the more easily the rock breaks and forms structural fractures. The linear density of fractures in intervals with different dolomite contents was calculated from imaging logging data, element well-logging data, and mud logging data. The results show that, except for a small number of limestone intervals (low dolomite content) that exhibit a high fracture density, the fracture density overall increases significantly with dolomite content (Figure 9A), which also confirms the control of lithology on fracture distribution in the study area. Abundant drilling data verify that at structural highs near the fault belt, both limestone and dolomite can form high-quality reservoirs, while at structural lows and inside the buried hill, high-quality reservoirs mainly develop in intervals with a high dolomite content. Essentially, in the interior of the buried hill and at structural lows, where the stress is weak, limestone can hardly fracture, while dolomite, with its lower compressive strength, breaks to become reservoirs.
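The fracture density vs. dolomite content relationship described above can be quantified with a simple least-squares trend; the interval statistics below are hypothetical stand-ins for the logging-derived values.

```python
import numpy as np

# Hypothetical interval statistics: dolomite content (%) vs. linear
# fracture density (fractures per metre) derived from imaging logging.
dolomite = np.array([5, 20, 35, 50, 65, 80, 95], dtype=float)
density = np.array([0.8, 0.6, 1.1, 1.6, 2.2, 2.9, 3.4])

# Least-squares linear trend: density ~ a * dolomite + b
a, b = np.polyfit(dolomite, density, deg=1)
print(f"density = {a:.3f} * dolomite + {b:.3f}")  # positive slope expected
```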
Combined with the tectonic evolution process and the above analysis, the development model of the carbonate buried hill reservoirs in the CFD area was established. In the study area, fractured-dissolved reservoirs are dominant. The early structural fractures provided flow pathways for later fluid dissolution, which is the key control on the development of the reservoirs. The distribution of high-quality reservoirs is jointly controlled by structure, lithology, and fluid. Before the Indosinian period, the tectonic setting was relatively quiet (Figure 10A). During the Indosinian, nearly NWW-trending folds formed in the study area due to S–N compression, and intensive fracture belts formed near the Indosinian thrust fault belt; meanwhile, the first, supergene karstification occurred (Figure 10B). Subsequently, the NEE-trending intensive fracture belt formed near the strike-slip faults through the transpressional strike-slip process during the Yanshanian. At this time, the Paleozoic buried hill was still exposed and the supergene dissolution continued (Figure 10C). Under this background, the fracture fillings show relatively high C isotope values. In the Himalayan, due to continuous extension, the Paleozoic strata experienced burial karstification along bedding on the slope, and the large burial depth led to more negative C isotope values of the fillings (Figure 10D). With continuous burial in the Cenozoic, the O isotope became generally negative, and the present reservoir pattern was finally formed (Figures 10E, F). Based on the above analysis, the development model of fractured-dissolved reservoirs was established. The stress is most concentrated near the Indosinian and Yanshanian faults, which are the dominant areas of fracture development. Dolomite is more brittle than limestone and more easily forms structural fractures in the process of structural transformation; it is thus the dominant lithology for fracture development.

FIGURE 10 (A) The stable tectonic setting during the pre-Indosinian period; (B) fracturing and exposed dissolution during the Indosinian thrust folding process; (C) fracturing and exposed dissolution during the Yanshanian strike-slip activity; (D, E) differential burial and dissolution during the Paleogene; (F) deep burial during the Neogene.
FIGURE 11 Development model of fractured-dissolved reservoirs in the CFD area.

Conclusion

The carbonate buried hill reservoirs in the CFD area are mainly fractured-dissolved reservoirs. The formation of these reservoirs is mainly related to structural fractures and fluid dissolution along the fractures. There are two groups of structural fractures by trend: NWW and NEE. The C and O isotopes reveal that the diagenetic fluid was mainly atmospheric fresh water. The formation of the NWW- and NEE-trending structural fractures was controlled by the Indosinian compression and the Yanshanian strike-slip transpression, respectively. Dolomite is more brittle than limestone and is the main lithology for forming effective fractures. The structural fractures provided favorable channels for the dissolution of atmospheric water.
The C and O isotopes reveal that at least two stages of dissolution have occurred in the study area: 1) supergene karstification, during which the fillings were mainly silty-fine crystal carbonate; and 2) burial karstification, during which the fillings were mainly coarse crystal carbonate. The development model of fractured-dissolved reservoirs under the joint control of structure, lithology, and fluid was established. This model emphasizes that structural fractures formed by tectonic activities are the key to reservoir development, and that lithology is the internal factor controlling reservoir distribution. Dolomite exhibits a compressive strength only about half that of limestone and is the dominant lithology for reservoir development. The two phases of dissolution by atmospheric water along the fractures greatly improved the physical properties of the reservoirs, which guaranteed the development of effective reservoirs.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

ZH: conception of ideas; HY: project guidance; TY: writing; HD: data processing; JG: drawing figures; SL: data collection; SX: language check; CM: funding.

Conflict of interest

Authors ZH, HY, TY, HD, JG, SL, and SX were employed by Tianjin Branch of CNOOC (China) Co., Ltd. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
5,601.4
2023-03-02T00:00:00.000
[ "Environmental Science", "Geology" ]
On the radion mediation of the Supersymmetry breaking in N=2, D=5 Supergravity Orbifolds We discuss the on-shell N=1 supersymmetric coupling of brane chiral multiplets in the context of N=2, D=5 Supergravity compactified on $S^1/Z_2$ orbifolds. Assuming a constant superpotential on the hidden brane we study the transmission of the supersymmetry breaking to the visible brane. We find that to lowest order in the five-dimensional Newton's constant $k_5^2$ and the gravitino mass squared $m_{3/2}^2$, the spinor field of the radion multiplet is responsible for inducing positive one-loop squared masses $m_{\phi}^2 \sim m_{3/2}^2 / (M_{Planck}^2 R^2)$ for the scalar fields localized on the visible brane, with $R$ the length scale of the fifth dimension. Considering a cubic superpotential on the visible brane we also find that non-vanishing soft trilinear scalar couplings $A$ are induced, given by $A = 3 m_{\phi}^2 / m_{3/2}$.

Introduction

During the last years a lot of effort has been expended in the study of the physics of extra dimensions. Especially the assumed brane picture of our world has attracted much interest, mainly because of the new insight it offers into particle physics beyond the standard model, into cosmology, and into the interplay between them. One of the main topics for which brane world models have been invoked is the hierarchy problem [1,2], which is connected with the origin of the mass scale of the electroweak symmetry breaking [3][4][5]. On the other hand these models have their origin in String Theory, where Supersymmetry is a basic ingredient [6][7][8]. One of the main issues that may be addressed in supersymmetric brane world models is the mediation of the supersymmetry breaking and the determination of the soft-breaking terms appearing in the corresponding four-dimensional low energy theories. These models may be constructed by orbifolding a supersymmetric five-dimensional theory with a compact extra dimension. The supersymmetry breaking is triggered on the hidden brane, which in some sense replaces the hidden sector of four-dimensional models [9], and is communicated through the bulk to the visible brane [10][11][12][13][14][15][16][17][18][19][20]. This transmission results in finite one-loop mass corrections for the scalar fields that live on the visible brane. The induced corrections have already been calculated in [20][21][22][23][24] and result in tachyonic masses, although it has been claimed that a full treatment of the radion multiplet may reverse this picture, yielding positive squared masses. In this work we study the transmission of the supersymmetry breaking in N = 2, D = 5 Supergravity [25][26][27] compactified on an $S^1/Z_2$ orbifold by working directly in the on-shell scheme. The orbifolding determines two branes, the visible brane located at $x^5 = 0$ and the hidden one at $x^5 = \pi R$. We construct the N = 1 supersymmetric couplings of the brane chiral multiplets with the bulk fields and find that they are determined by a Kähler function reminiscent of the no-scale model [28,29]. The Lagrangian derived in this way describes the full brane-radion coupling at least to order $k_5^2$, which is adequate for our purposes. Assuming a constant superpotential on the hidden brane we calculated at one loop the soft scalar masses squared $m^2_{\phi}$, induced by the mediation of the radion multiplet. These were found to be positive. Moreover, by considering a typical cubic superpotential $W$ on the visible brane we found that non-vanishing trilinear scalar couplings are also induced.
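For reference, the quantitative relations quoted in this paper (the scaling of the induced soft masses and the induced trilinear coupling from the abstract, together with the gravitino-mass expression used later in the text) can be collected in one place; this is a restatement, not an additional derivation:

```latex
% Relations quoted in the abstract and in the Soft scalar masses section.
\begin{align}
  m_{\phi}^{2} &\sim \frac{m_{3/2}^{2}}{M_{\mathrm{Planck}}^{2}\,R^{2}}, &
  A &= \frac{3\,m_{\phi}^{2}}{m_{3/2}}, &
  m_{3/2} &= \frac{k_{5}^{2}\,|c|}{\pi R}.
\end{align}
```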
Brane Multiplet Coupling

In a relatively recent publication [30] we addressed the problem of the coupling of N=1 multiplets on the boundary branes in the context of five-dimensional N=2 Supergravity orbifolds. For chiral brane multiplets we derived the coupling, to all orders in the gravitational coupling constant, to the bulk gravitational fields, the graviton and the gravitino. For the derivation we worked in the on-shell scheme using the Noether procedure. The alternative of using the off-shell formulation proves too tedious, due to the fact that the theory is described by numerous auxiliary fields. Besides, in the on-shell scheme the possible gaugings have been classified, which is essential when one wants to promote the theory to a unified theory into which the Standard Model is embedded. In that work we did not derive the full couplings of the radion multiplet to the brane fields. The radion multiplet propagates in the bulk, but it also consists of even fields able to couple to the brane fields. The first-order term in the five-dimensional gravitational constant, already derived in [30], was found to be $\frac{1}{\sqrt{6}}\,(-J^{(\phi)}_{\mu} + \frac{1}{2} J^{(\chi)}_{\mu})\, F^{0\,\mu 5}$, but this by itself is not sufficient for the study of the mediation of the supersymmetry breaking from the hidden to the visible brane. One may continue using the Noether procedure in order to complete the couplings of the brane fields with the radion multiplet. However, this task turns out to be cumbersome after a few steps, and one may seek an alternative way to derive these terms systematically using the standard knowledge of N=1, D=4 Supergravity. This is undertaken in this work. The main observation is that the restrictions of the radion fields $T \equiv \frac{1}{\sqrt{2}}\,( e_{\dot{5}}^{\,5} - i \sqrt{\tfrac{2}{3}}\, A^0_5 )$, $\chi^{(T)} \equiv -\psi^2_5$ on the brane form a chiral multiplet, in whose lowest-order transformation laws the combination $F^0_{\mu 5}$ appears. The couplings of the radion superfield with the other brane and gravity fields will be described by a Kähler function, say $F$, in the usual manner encountered in N=1, D=4 supergravity [31]. For convenience one can split this function as $F = N(T, T^*) + K(T, \phi, T^*, \phi^*)$, where the first term describes the restriction of the five-dimensional supergravity on the brane, which survives even in the absence of brane multiplets, and the second is associated with the presence of brane multiplets. From our previous analysis [30] we concluded that the form of the second function is $K(T, \phi, T^*, \phi^*) \equiv \Delta^{(5)} K(\phi, \phi^*)$, where $\Delta^{(5)} \equiv e_{\dot{5}}^{\,5}\, \delta(x^5)$. For the determination of the first function $N(T, T^*)$ we observe that, applying Noether's approach in the most natural and plausible manner and avoiding as much as possible mathematical complexities, the restriction of the five-dimensional supergravity action on the branes does not take its familiar four-dimensional form. In fact all terms in the gravitational part of the action involve the determinant of the five-dimensional metric $e^{(4)} e_{\dot{5}}^{\,5}$, instead of $e^{(4)}$, and besides there are no kinetic terms for the real part $e_{\dot{5}}^{\,5}$ of the scalar field $T$ or for the spinor field $\psi^2_5$ of the chiral radion multiplet.
The same situation is encountered in ordinary N=1, D=4 supergravity where, for a chiral multiplet $(S, f_S)$ for instance, by appropriate Weyl rescalings $e^m_{\mu} \to e^f e^m_{\mu}$ with $e^{2f} = \sqrt{2}\,\mathrm{Re}(S)$, followed by appropriate shifts in the gravitino field, one can eliminate the kinetic terms for $\mathrm{Re}(S)$ and $f_S$, having as an effect the appearance of $e^{(4)}\,\mathrm{Re}(S)$ instead of $e^{(4)}$ in the Lagrangian. From this it becomes obvious that one needs the inverse transformations to be implemented in the N = 2, D = 5 Supergravity Lagrangian derived by applying the Noether procedure. These are the corresponding inverse Weyl rescalings with $e^{2f} = e_{\dot{5}}^{\,5}$, and they are able to cast the bulk Lagrangian, and especially its restriction on the branes, in the typical form reminiscent of N = 1, D = 4 Supergravity, in which the kinetic terms for $e_{\dot{5}}^{\,5}$ and $\psi^2_5$ are present. In Eq. (1) the hatted fields are those describing the original N = 2, D = 5 action. Then, in terms of the transformed (unhatted) fields, the couplings of the brane fields are those of N=1, D=4 supergravity derived from the Kähler function of Eq. (2). At this point let us remark that in this Lagrangian one has to substitute the $\partial_{\mu} A_5$-type combinations, since it is these that actually appear in the 5-D supersymmetric transformations of the fields. The interaction of the brane fields with the radion multiplet stems from the Lagrangian of Eq. (3), where for simplicity we do not present the four-fermion terms. A non-trivial superpotential $W(\phi)$, giving rise to Yukawa and potential terms, can be easily incorporated [30]. In the Lagrangians above and in what follows $\psi_{\mu}$ stands for $\psi^1_{\mu}$, the even gravitino field which lives on the visible brane as well as in the bulk. Some comments concerning the above Lagrangian are in order: i. The part $\mathcal{L}_0$ contains both the terms describing the interaction of the radion multiplet with the fields localized on the branes and also terms involving only the radion multiplet fields, which live in the bulk. This is due to the particular form of the Kähler metric arising from the Kähler function $F$ of Eq. (2). From the bosonic and fermionic kinetic terms in the Lagrangian of Eq. (3) we obtain, in a straightforward manner, the kinetic terms of the radion multiplet, which are non-canonical in the bulk. Of these terms only the kinetic term of the field $A_5$ remains if we express the action in terms of the untransformed hatted fields. Moreover, these terms remain on the brane multiplied by $\frac{2}{3} \Delta^{(5)} K$. Cancellation of the $\Delta^{(5)} K$ terms in the kinetic part of $\ln e_{\dot{5}}^{\,5}$ can be achieved if one adds pure-gravity terms and gravitino kinetic terms localized on the brane. However, since this is not mandatory for the N=1 supersymmetry invariance of the brane action, we choose to keep the pure supergravity part in the bulk as it appears above. Notice also the presence of the term involving $F^0_{\mu 5}$, which stems from the remaining terms of Eq. (3) [30]. ii. In the above formulae we have written the Lagrangian for a chiral multiplet located on the brane at $x^5 = 0$. A similar expression holds for the hidden brane at $x^5 = \pi R$. We have just to add a hidden-brane Kähler function $F_H = \delta(x^5 - \pi R)\, K_H$, of the form of Eq. (2) but depending only on the hidden brane fields, and the corresponding superpotential $W_H$. In our work we will consider a constant superpotential on the hidden brane, triggering spontaneous breaking of supersymmetry. iii. The extra power of the $\Delta^{(5)}$ prefactor multiplying the potential terms is cancelled in the first term, since the inverse of the Kähler metric already includes the inverse $(\Delta^{(5)})^{-1}$.
However, it is not cancelled in the second term, which is proportional to $|W|^2$. Such singularities are not new; they also occur in N=2, D=5 supersymmetric theories in flat space-time, where they cure singularities arising from the propagation of the bulk fields in order to maintain supersymmetry [10]. We also remark that the negative term $-3|W|^2$ of the potential flips its sign in the scalar potential if we consider the coupling with the radion multiplet. To illustrate this we consider for simplicity just one chiral multiplet on the visible brane, with superpotential given by $W(\phi) = \frac{\lambda}{6}\phi^3$ and Kähler function $K(\phi, \phi^*) = \phi\phi^*$. The scalar potential can then be computed explicitly up to order $k_5^2$. Thus, despite the fact that a negative term appears in the potential, the potential nevertheless turns out to be always positive. This feature is independent of the particular forms of $K$ and the superpotential $W$, which depend on the visible fields. This form of the potential opens up the possibility of metastable de Sitter vacua [32]. This can be accomplished in other approaches at the cost of introducing extra D-terms in the action [33].

Soft scalar masses

For the purpose of studying the transmission of supersymmetry breaking, we consider a constant superpotential $c$ on the hidden brane. In the corresponding Lagrangian, $\Delta^{(h)}_{(5)}$ is the analogue of $\Delta^{(5)}$ for the hidden brane. The absence of scalar potential terms is justified by the fact that, in the absence of brane chiral multiplets, the corresponding Kähler function is that of a no-scale model. In this case we see that mass terms for the gravitino $\psi_{\mu}$ and for the spinor field $\psi^2_5$ of the radion multiplet arise on the hidden brane. Moreover, the kinetic term $\Delta^{(5)} K\, ( \psi^2_5 \sigma^{\mu} D_{\mu} \bar{\psi}^2_5 + \bar{\psi}^2_5 \bar{\sigma}^{\mu} D_{\mu} \psi^2_5 )$ appears on the visible brane. These two terms communicate through the bulk propagation of the spinor field of the radion multiplet, yielding non-vanishing masses for the scalar fields of the brane chiral multiplet. We choose $K = \phi\phi^*$ and, for the calculation of the mass corrections, a gauge in which the bulk kinetic terms of the gravitinos are disentangled from their fifth components. That done, we treat the mass terms on the hidden brane as interaction terms. This is sufficient for our purposes, since we are interested in mass corrections for the brane scalar fields, which are of order $m^2_{3/2} \propto |c|^2$. The diagonalization of the $\psi^1_{\mu} \equiv \psi_{\mu}$, $\psi^2_{\mu}$ and $\psi^{1,2}_5$ bulk kinetic terms is achieved by choosing the gauge fixing term appropriately. The five-dimensional gravitino kinetic terms, expanded in components, show that the $m$ and $5$ components are mixed only through $x^5$-derivatives. In our approach we employ a gauge fixing term, added to the bulk Lagrangian, written in terms of the doublet $\Psi_m = (\psi^1_m, \psi^2_m)^T$, whose upper component consists of even fields. We choose to proceed with the second choice and, since we are interested in diagrams involving propagation from the hidden to the visible brane, we use the pertinent Dirac and gravitino propagators in the mixed momentum-configuration space representation [5,34,35]. In this representation, and in the particular gauge with the value of $\xi$ chosen as above, the orbifolded propagators are expressed through a function $F(p, y, y')$, where $y, y'$ denote variables along the fifth dimension and $q = \sqrt{-p^2 + i\epsilon}$. In order to proceed further we have to perform the shifts of Eq. (9) in the relevant terms of the brane Lagrangians.
That done, the pertinent visible brane and hidden brane terms are brought to forms involving the chiral projection operators $P_{L,R} = \frac{1}{2}(1 \pm i\gamma^{\dot{5}})$ and the charge conjugation matrix $C$. Calculating the diagrams depicted in Fig. 1, which are relevant for the scalar mass terms to order $|c|^2$ in the supersymmetry breaking scale, we find the mass corrections to the scalar fields involved. The external momenta of the external scalar fields in these graphs have been taken to vanish. The general structure of the loops involved is
$$
\int \frac{d^4 p}{(2\pi)^4} \int dy\, dy_1\, dy_2\; \delta(y)\, \delta(y_1 - \pi R)\, \delta(y_2 - \pi R)\; \mathrm{Tr}\left[ V\, G(p, y, y_1)\, V_1\, G(p, y_1, y_2)\, V_2\, G(p, y_2, y) \right],
$$
where the subscript $i$ in the constant $c^{(i)}$ labels each graph. Collecting all contributions entails a finite mass correction. In the above expressions we have reinstated the dimensions and we have made use of the fact that $m_{3/2} = k_5^2 |c| /(\pi R)$. We note that the supersymmetry breaking through a constant superpotential on the hidden brane has resulted in non-tachyonic scalar masses for the brane fields. In our approach the positivity of $m^2_{\phi}$ is intimately related to the presence of the spinor field $\psi^2_5$ of the radion multiplet. Since the supersymmetry breaking occurs on the hidden brane, this field cannot be gauged away from the whole Lagrangian by a transformation, as would be the case for an ordinary goldstino in the "unitary" gauge. This feature may explain the difference from other approaches, where a "unitary" gauge is adopted to set $\psi^2_5 = 0$, leaving only one diagram in which the gravitino is the only propagating field. Notice that the corresponding diagram (f) of Fig. 1 yields a negative contribution in our case as well. However, in our treatment the rest of the diagrams, involving at least one $\psi^2_5$ fermion, yield contributions that render the scalar masses squared positive.

Trilinear soft scalar couplings

For the study of the effect of the supersymmetry breaking on the trilinear scalar couplings we consider a cubic superpotential on the visible brane, $W(\Phi) = \frac{\lambda}{6}\Phi^3$. The graphs one needs to calculate for the trilinear couplings are shown in Fig. 2. The Lagrangian terms necessary for this computation stem from the Yukawa-type terms in the action. To order $m_{3/2}$ the separate diagrams depicted in Fig. 2 yield trilinear scalar field corrections with coefficients $c^{(a)} = \frac{1}{12}$, $c^{(b)} = \frac{1}{48}$, $c^{(c)} = \frac{1}{12}$. Adding the separate contributions, one gets a correction to the cubic potential, with overall coefficient $\frac{3}{16}$, due to the supersymmetry breaking that occurred on the hidden brane. Therefore the induced trilinear soft scalar coupling is $A = 3 m^2_{\phi} / m_{3/2}$.

Discussion

In the context of D = 5, N = 2 Supergravity compactified on $S^1/Z_2$, we considered the N = 1 supersymmetric couplings of matter localized on one of the branes of the orbifold. The inclusion of the radion multiplet couplings is accomplished to second order in the five-dimensional gravitational constant $k_5$, working directly in the on-shell formalism. We studied the transmission of the supersymmetry breaking occurring on the hidden brane to the visible sector of the theory. In particular, up to second order in $m_{3/2}$ we calculated the one-loop masses induced for the scalar fields on the visible brane. Proper treatment of the radion multiplet shows that this transmission results in positive universal squared masses $m^2_{\phi} > 0$.
Furthermore, we found that a universal trilinear soft scalar coupling is induced by the transmission of the supersymmetry breaking, given by $A = 3 m^2_{\phi} / m_{3/2}$, which is non-vanishing and positive. These results can be easily extended to gauged supergravities.
4,361
2007-11-09T00:00:00.000
[ "Physics" ]
Classification Freshness of Red Snapper (Lutjanus Campechanus) Based on Eye Image Using Convolutional Neural Network

Indonesia is a maritime country where fish is the most widely harvested and consumed marine natural resource, and one such fish is snapper. Snapper is high in protein and therefore good for health. Red snapper, or Lutjanus campechanus, is an economical fish with a broad market share. It belongs to the demersal fish group and ranks third among the most exported commodities after tuna and shrimp. In addition, snapper is one of the most common consumption fish in Indonesia, so the community needs to be able to identify the freshness of the fish. Fish freshness detection is done manually by touching the fish's body, eyes, and gills; however, this can cause accidental damage to these parts, which is very detrimental. Several studies on identifying fish freshness report that the VGGNet-16 architecture on the Convolutional Neural Network algorithm is superior in modeling performance. This research uses a different fish object, the red snapper, with two architectures from previous studies, namely LeNet-5 and VGGNet-16. It focuses on the eye image: in the data pre-processing stage the fish body is cropped away, followed by augmentation to multiply the image data without losing its essence before training the dataset. The model is trained using the Adam optimization method with very fresh and not fresh as the predicted classes. The experimental results of the two-class classification of red snapper freshness using 600 fish images show that VGGNet-16 achieves the best performance compared with the LeNet-5 architecture, with a classification accuracy of 98.40%.

Introduction

Indonesia is a maritime country where the marine natural resource most often harvested and consumed is fish, one of which is red snapper. Red snapper, or Lutjanus campechanus, is a demersal fish that can live in shallow to deep seas. According to the Central Statistics Agency (BPS), national Lutjanus campechanus production was recorded at 1.95 thousand tons in 2021. Lutjanus campechanus is an economically important fish that belongs to the demersal group and ranks third in terms of the largest export commodities after tuna and shrimp. In addition, Lutjanus campechanus is one of the most common consumption fish found in Indonesia, so the public needs to be able to identify the freshness of the fish. Fresh fish are characterized by clear eyes with clear corneas, black pupils, and a convex shape; fresh red gills; scales that are strongly attached, shiny, and covered with clear mucus; and a smell typical of fish. As quality decreases, the gills become gray, slimy, and smelly [1]. The level of freshness of fish is generally identified manually by eye observation, so it is challenging for the community to distinguish a fish's freshness level. In addition, the freshness of fish can be identified by touching the body, eyes, and gills, but this can cause accidental damage to the fish, which is very detrimental. Many studies on the classification of fish freshness have been carried out; one of them used non-destructive image processing with the fish skin as the focal tissue. The skin tissue was segmented using the saturation channel of the HSV color space model. Feature statistics extracted in the HSV color space provided the fish-freshness degradation pattern, which was used to design a framework for fish freshness identification; the maximum classification accuracy of this method was 96.66% [2].
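As a rough illustration of the saturation-channel segmentation described above (not the cited authors' actual code; the thresholding rule and file names are hypothetical), an OpenCV sketch might look like this:

```python
import cv2

# Hypothetical input file; any RGB photo of a fish would do.
img = cv2.imread("fish.jpg")

# Convert to HSV and keep only the saturation channel, as in the
# skin-segmentation approach described above.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1]

# Otsu thresholding on the S channel separates skin tissue from background.
# (The cited study's exact thresholding rule is not specified in the text.)
_, mask = cv2.threshold(saturation, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

segmented = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("fish_skin_segmented.jpg", segmented)
```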
Identification of freshness from the gill tissue of fish has also been carried out with an automatic image processing approach. Features were extracted from the automatically segmented gill focal tissues using the Wavelet Transform. The gill tissue of fresh fish is reddish brown, and a change in the color of this tissue indicates spoilage. In the proposed methodology, the gill focal tissue is taken as the region of interest (ROI), the image segment that carries the complete information for feature extraction. From the input RGB fish image, the gills are segmented as the ROI because their reddish-brown color carries the complete information. (A non-destructive technique evaluates material properties without causing damage.) These discriminatory features from the experiment establish a relationship between the statistical wavelet coefficients and the freshness of stored fish [3]. In addition, the classification of fish freshness using several fish samples, namely giant gourami, red snapper, and Nile tilapia, was carried out on digital images with the K-Nearest Neighbor approach, producing an average accuracy of 91.36% [4]. Research using the Convolutional Neural Network (CNN) algorithm is currently a widely developed topic, including for identifying or classifying fish freshness. Research on classifying the freshness level of milkfish compared several architectures, namely Xception, MobileNet V1, ResNet50, and VGG16. The experimental results of the two-class classification of milkfish freshness using 154 images show that VGG16 achieves the best performance among these architectures, with a classification accuracy of 97% [5]. Another study used a Deep Convolutional Neural Network (DCNN) approach to detect the freshness of sardine samples and classify them as fresh or rotten. The automatic detection system was implemented and evaluated, obtaining 99.5% accuracy, 96.2% sensitivity, 92.3% specificity, 92.6% PPV, 96% NPV, and a 94% F1-score. The method involves several stages, including data pre-processing (image rescaling and color transformation), splitting into testing and training data, and classification using the deep CNN approach [6]. Another study implemented fish freshness detection with a convolutional neural network (CNN) approach to detect goldfish freshness. A VGG-16 architecture was applied to extract features automatically from the fish images. The developed classifier block was then constructed with dropout, and a dense layer was used to classify the image. The results indicate a classification accuracy of 98.21%, and the conclusion is that the CNN-based proposal has lower complexity and higher accuracy than traditional classification methods [7]. In another study on freshness detection, using Nile tilapia samples, an automated method for classifying fish freshness was based on a combined deep learning model and image processing. The process extracts features using the VGG-16 neural network architecture, and bi-directional long short-term memory is used to build the machine learning model.
The proposed model achieved 98% accuracy in testing [8]. This study aims to develop software to read and analyze fish eye images and then automatically predict whether the image shows a fresh fish or a non-fresh fish, using two architectures from previous studies, namely LeNet-5 and VGGNet-16. The experiment uses the red snapper object and consists of image acquisition, image pre-processing, augmentation, and validation with the holdout method. Figure 1 shows this study's system design, which consists of image acquisition, data pre-processing, augmentation, classification using the Convolutional Neural Network algorithm, performance analysis of the classification, and the resulting algorithm performance. The fish freshness classification is implemented in the Python programming language, assisted by the TensorFlow library, one of the most popular Python libraries for creating deep learning models.

Image acquisition

Image data of Lutjanus campechanus were obtained from the Dulan Pokpok Fisheries Port, Jl. Yos Sudarso, Dulan Pok-Pok Village, Wagom Village, Kec. Fak-Fak, Fak-Fak Regency. The fish images were taken from various angles using an iPhone 7+ with a dual 12-megapixel full-HD camera and a screen resolution of 1920 × 1080 pixels. The fish image data were collected during April-August 2021. The data obtained were 300 images of very fresh fish and 150 images of non-fresh fish. A sample of the image data for fresh Lutjanus campechanus can be seen in Fig. 2, while sample data for non-fresh Lutjanus campechanus can be seen in Fig. 3.

Pre-processing image

Image pre-processing is the step that produces the input Lutjanus campechanus image data for the classification process by cropping the image. After the image data are collected at the acquisition stage, the cropping process removes unnecessary objects, namely the fish's body, since this research focuses on the red snapper's eye. The image cropping produces images of different resolutions. Sample image data of a Lutjanus campechanus after pre-processing can be seen in Fig. 4.

Augmentation

Data augmentation is the process of multiplying an image without losing its essence [9]. Augmentation is a technique to create new training data artificially from existing training data; its aim is to expand the training data set, improve CNN performance, and prevent over-fitting [10]. Augmentation was carried out only on the non-fresh fish data, because fewer such data were obtained than fresh fish data, so the available data were not balanced. In the data augmentation process, traditional transformations were used, namely reflection and color transformation. These are among the most popular augmentation techniques because they are easy to understand and have proven to be fast, reproducible, and reliable; moreover, the implementation code is relatively simple and available with most deep learning frameworks [11]. The augmentation is implemented using the Keras deep learning library through the ImageDataGenerator class. Three techniques were used in this study: random brightness, a type of color transformation, producing 50 new images; and horizontal flip and vertical flip, reflection types of the traditional transformation technique, each producing 50 new images. The non-fresh red snapper image data comprised 150 fish; after augmentation, 300 non-fresh red snapper images were obtained, so the training data consist of 300 fresh and 300 non-fresh Lutjanus campechanus images.
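A minimal sketch of such an ImageDataGenerator setup is shown below; the brightness range and directory layout are hypothetical, since the text does not state the exact values used:

```python
import tensorflow as tf

# Reflection and color-transformation augmentations as described above:
# random brightness plus horizontal and vertical flips.
# brightness_range is a hypothetical choice; the study does not report it.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    brightness_range=[0.7, 1.3],
    horizontal_flip=True,
    vertical_flip=True,
    rescale=1.0 / 255,
)

# Hypothetical directory layout: one subfolder per class (fresh / not_fresh).
train_gen = datagen.flow_from_directory(
    "dataset/train",
    target_size=(50, 50),   # matches the modified VGGNet-16 input size
    batch_size=35,          # batch size stated in the hyperparameter settings
    class_mode="binary",    # two classes: fresh vs. not fresh
)
```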
Architectures of the Convolutional Neural Network (CNN)

CNN is a supervised deep learning method. The algorithm is suitable for both multi-class and binary classification. CNN is often used to solve various pattern and image recognition problems, and deep learning approaches are effective and well suited to visual data [12]. The CNN model combines convolutional layers, pooling layers, and fully connected layers, which respectively extract features from the input, reduce its size for computational performance, and classify the image [10]. This study uses two CNN architectures, LeNet-5 and VGG-16, described below.

Hyperparameters of the CNN

A hyperparameter is a variable that determines how a model is trained. In this experiment the CNN hyperparameters were set as presented in Table 1. We adjusted the hyperparameters during the experiment as follows: the number of neurons in the fully connected layer is 1024, the dropout is 0.1, the optimizer is Adam, the learning rate is 1e-5, the loss function is binary cross-entropy, the number of epochs is 100, and the batch size is set to 35.

LeNet-5 architecture

The LeNet-5 neural network architecture was designed in 1998 by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner for handwritten and machine-printed character recognition [13]. LeNet-5 has eight layers, namely five convolution layers and three fully connected layers. Each unit has 25 inputs: a unit in the first hidden layer receives input from a 5×5 area of the input image. This local area of the input image is called the unit's receptive field. The unit's output is stored at the same location on the feature map, and various feature maps are generated from different weight vectors applied to the same input image; features can then be extracted from the resulting feature maps. Sub-sampling is performed in the second layer, and the number of feature maps obtained after sub-sampling is the same as after convolution. In the 2×2 sub-sampling layer, the area is taken as input, the average of the four inputs is computed, multiplied by a trainable coefficient, a trainable bias is added, and the result is passed to the sigmoid function. An increase in the number of feature maps can be observed as the spatial resolution decreases layer by layer. Learning is carried out using the backpropagation method [14]. Table 2 shows the network layers of the CNN LeNet-5 architecture used to implement the fish freshness classification. It differs from the original architecture in the output layer, which has a size of two classes, because the classification output in this study uses only two classes, namely fresh fish and non-fresh fish.

VGGNet-16 architecture

The first convolution input layer of VGGNet-16 uses a standard image size of 224 × 224 RGB. VGGNet-16 has 16 layers, namely 13 convolution layers and three fully connected layers. VGGNet-16 uses the block concept to form convolution layers, each with a 3 × 3 kernel and a stride of 1; at the end of each block, a max pooling layer of size 2 × 2 with stride 2 is used. In this study, the first convolution input layer was modified to 50 × 50 because of the large amount of processed data, which would otherwise require a heavy training process; the solution is to reduce the resolution of the input image in the training and testing process.
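A minimal sketch of a model compiled with the stated hyperparameters is given below. The exact layer stacks of the modified networks are in Tables 2 and 3, which are not reproduced here, so the tiny LeNet-5-style stack shown is only indicative:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Indicative LeNet-5-style stack on the 50x50 RGB inputs used in this study;
# the exact layer configuration follows Table 2, not reproduced here.
model = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="relu", input_shape=(50, 50, 3)),
    layers.AveragePooling2D((2, 2)),
    layers.Conv2D(16, (5, 5), activation="relu"),
    layers.AveragePooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),  # 1024 neurons, per the hyperparameters
    layers.Dropout(0.1),                    # dropout 0.1, per the hyperparameters
    layers.Dense(1, activation="sigmoid"),  # two classes -> single sigmoid unit
])

# Optimizer, learning rate, and loss exactly as stated in the text.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Training call with the stated epoch count; the batch size of 35 is set
# in the data generator from the augmentation sketch above.
# model.fit(train_gen, epochs=100)
```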
The researchers modified the network using the VGGNet-16 concept, producing a convolutional neural network model with a modified VGGNet-16 architecture. Table 3 shows the network layers of the modified VGGNet-16 CNN architecture.

Holdout validation

The validation process is fundamental; its goal is that every piece of data can serve as both training and test data. Among the several model validation schemes, one is holdout validation [15]. Holdout validation is a dataset split in which the data are divided into testing data and training data; for example, with a ratio of 0.2, 20% of the data is used for testing and the remaining 80% for training. This study uses the holdout validation method, the simplest method, which takes the original dataset and randomly divides it into two sets, "training" and "testing". The holdout method was applied to all trials conducted with deep learning (CNN), using 80% of the data (480 images) for training and the remaining 20% (120 images) for testing.

LeNet-5 architecture training performance

The performance of the model in the LeNet-5 architecture training process is based on the hyperparameters and the CNN network layer architecture. The results show that the highest training accuracy, 95.78%, was reached at the 100th epoch, while the lowest, 87.77%, occurred at the 20th epoch; from epoch 20 to epoch 100 the accuracy changed rapidly. The results of the LeNet-5 architecture training performance can be seen in Table 4, and the graphs of train-test accuracy and train-test loss in Fig. 5.

Training performance of the VGGNet-16 architecture

The VGGNet-16 architecture training process is likewise based on the hyperparameters and the CNN network layer architecture. The results show that the highest training accuracy, 98.40%, was reached at the 100th epoch, while the lowest, 94.10%, occurred at the 20th epoch; from epoch 20 to epoch 100 the accuracy changed rapidly. Table 5 shows the results of the VGGNet-16 architecture training performance, and Figure 6 shows the train-test accuracy and train-test loss.

Testing performance of the LeNet-5 architecture

After training with LeNet-5 and VGGNet-16, the best model was obtained based on the predetermined hyperparameters. The model was then tested on new data (not previously used in training) to determine its performance. The test data comprise 40 fish images, consisting of 20 images of fresh fish and 20 images of non-fresh fish, and the test measures the accuracy obtained by the models generated with LeNet-5 and VGGNet-16. Of the 20 Lutjanus campechanus images labeled fresh, LeNet-5 detected 14 as fresh and six as non-fresh; of the 20 images labeled non-fresh, it detected 13 as non-fresh and 7 as fresh. The following table shows the LeNet-5 test results for fresh and non-fresh red snapper.
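From counts like these, the test accuracy follows directly as (correct predictions)/(total images). A small sketch, using the LeNet-5 counts above and the VGGNet-16 counts reported in the next subsection:

```python
def accuracy(tp, fn, tn, fp):
    """Fraction of correct predictions out of all test images."""
    return (tp + tn) / (tp + fn + tn + fp)

# LeNet-5: 14 of 20 fresh and 13 of 20 non-fresh detected correctly.
print(f"LeNet-5:   {accuracy(14, 6, 13, 7):.1%}")   # 67.5%

# VGGNet-16: 15 of 20 fresh and 15 of 20 non-fresh detected correctly,
# which reproduces the 75% reported for the new-data test.
print(f"VGGNet-16: {accuracy(15, 5, 15, 5):.1%}")   # 75.0%
```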
Testing performance of the VGGNet-16 architecture

After training with LeNet-5 and VGGNet-16, the best model was obtained based on the predetermined parameters and tested on new data to determine its performance. The data again comprise 40 fish images, consisting of 20 images of fresh fish and 20 images of non-fresh fish, used to measure the accuracy obtained by the models generated with LeNet-5 and VGGNet-16. Of the 20 red snapper images labeled fresh, VGGNet-16 detected 15 as fresh and five as non-fresh; of the 20 Lutjanus campechanus images labeled non-fresh, it detected 15 as non-fresh and five as fresh. The following table shows the VGGNet-16 test results for fresh and non-fresh red snapper.

LeNet-5 and VGGNet-16 model testing results using new data

The following explains the classification performance used in this study by computing the performance measures. Table 8 shows the prediction results of LeNet-5 and VGGNet-16. In the model tests, the highest accuracy obtained was 75%, using the VGGNet-16 model. Figure 7 shows the new test data images; these 40 images were not used for training.

Conclusion

Based on the analysis of the results of identifying the freshness level of Lutjanus campechanus using the CNN LeNet-5 and VGGNet-16 methods, the following can be concluded. The procedure for building the classification system involves several stages, starting from image acquisition, cropping, and data augmentation; these first stages produce the input data for the classification system, after which the classification process consists of training and testing. The training comparison of the two architectures, LeNet-5 and VGGNet-16, gave a highest accuracy of 95.78% for LeNet-5 and 98.40% for VGGNet-16, both using epoch 100, batch size 35, and learning rate 0.0001, so the highest accuracy was obtained with the VGGNet-16 architecture. A comparison of the test results of the two architectures on new data likewise showed the higher accuracy for VGGNet-16, at 75%.
4,242
2022-02-28T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Two Conformations of Archaeal Ssh10b

The DNA-binding protein Ssh10b from the hyperthermophilic archaeon Sulfolobus shibatae is a member of the Sac10b family, which has been speculated to be involved in the organization of the chromosomal DNA in Archaea. Ssh10b affects the DNA topology in a temperature-dependent fashion that has not been reported for any other DNA-binding proteins. Heteronuclear NMR and site-directed mutagenesis were used to analyze the structural basis of the temperature-dependent Ssh10b-DNA interaction. The data analysis indicates that two forms of Ssh10b homodimers co-exist in solution, and the slow cis-trans isomerization of the Leu61-Pro62 peptide bond is the key factor responsible for the conformational heterogeneity of the Ssh10b homodimer. The T-form dimer, with the Leu61-Pro62 bond in the trans conformation, dominates at higher temperature, whereas the population of the C-form dimer, with the bond in the cis conformation, increases on decreasing the temperature. The two forms of the Ssh10b dimer show the same DNA binding site but have different conformational features that are responsible for the temperature-dependent nature of the Ssh10b-DNA interaction.

Proteins of the Sac10b family are highly conserved among thermophilic and hyperthermophilic Archaea, and homologous sequences have also been identified in eukaryal proteins from higher plants, protists, and vertebrates (1)(2)(3). The members of this family have been postulated to play a role in chromosomal organization in Archaea since the initial isolation of Sac10b from the hyperthermophile Sulfolobus acidocaldarius in the mid-1980s. Sac10b exists as a dimer of two 10-kDa subunits in solution and binds to DNA nonspecifically (4,5). Electron microscopic studies have shown that Sac10b binds to DNA cooperatively and forms different protein-DNA complexes depending on protein/DNA ratios but does not induce DNA supercoiling or compact DNA (6). Recently, Bell et al. (2) discovered that Alba (also named Sso10b, a member of the Sac10b family) forms a specific complex with a Sir2 homolog in Sulfolobus solfataricus cell extracts. They found that Sir2, in the presence of NAD+, can regulate the DNA binding affinity of Alba by deacetylation of Lys16 of the protein. More recently, the crystal structure of Alba has been solved. Interestingly, the protein shares structural homology with the C-terminal domain of the Escherichia coli translation factor IF3 and the N-terminal DNA binding domain of DNase I. A model for the Alba-DNA interaction has been proposed (7). Ssh10b, another member of the Sac10b family, was isolated from Sulfolobus shibatae (1). The protein is highly abundant and basic and binds double-stranded DNA without apparent sequence specificity. Gel retardation assays have shown that Ssh10b has two modes of DNA binding with distinctively different binding densities. In the low binding density mode, Ssh10b exhibits a binding size of ~12 bp of DNA, whereas in the high binding density mode, the protein appears to bind shorter stretches of DNA. Interestingly, Ssh10b affects DNA topology in a temperature-dependent fashion; it is capable of significantly constraining DNA in negative supercoils at temperatures higher than 318 K, but this ability is drastically reduced at 298 K (1). A previous NMR study revealed the co-existence of two forms of Ssh10b dimers at temperatures between 283 and 320 K, with one dominating at lower temperatures and the other at higher temperatures (8).
However, the structural basis for the conformational heterogeneity of the Ssh10b dimer and the temperature dependence of the interaction of Ssh10b with DNA remained to be clarified. In the present study, we investigated the heterogeneous conformations of Ssh10b and the structural factors influencing the interaction of Ssh10b with DNA by heteronuclear NMR spectroscopy. We found that the cis-trans isomerization of the Leu61-Pro62 peptide bond of Ssh10b is the primary determinant of the conformational heterogeneity of the Ssh10b dimer. We also found that the equilibrium between the cis- and trans-forms of Ssh10b is sensitive to temperature. Our data suggest that the effect of temperature on the capacity of the protein to constrain negative DNA supercoils is related to the temperature-dependent conversion between the two Ssh10b conformations.

EXPERIMENTAL PROCEDURES

Expression and Purification of Ssh10b and Its Mutants-Ssh10b was produced from a synthetic gene with codon usage optimized for expression in E. coli. The gene was created from 12 overlapping oligonucleotide primers that were ligated and then cloned into the EcoRI and BamHI sites of vector pBV220. The genes of the Δ8 (deletion of the N-terminal eight residues), Δ8P18A (Pro18 replaced by Ala in the Δ8 mutant), and P62A (Pro62 replaced by Ala) mutants of Ssh10b were obtained by primer-directed mutagenesis. Each gene was cloned into the expression vector pET11c, and the products were used to transform E. coli BL21(DE3) cells. The transformed cultures were grown at 37 °C in 1 liter of LB broth containing 50 mg/liter ampicillin until A600 = 0.8-1.0, and expression was induced for 2 h by adding isopropyl-1-thio-β-D-galactopyranoside to a final concentration of 1 mM. The harvested cells were resuspended in 20 ml of buffer containing 30 mM potassium phosphate, pH 6.6, 0.1 mM EDTA, and 1 mM phenylmethylsulfonyl fluoride and sonicated. The lysate was centrifuged at 150,000 × g for 2.5 h at 4 °C, and the supernatant was then heated for 20 min at 80 °C to precipitate the E. coli proteins. After centrifugation, the supernatant was applied to a Resource-S column. Bound proteins were eluted with a 50-ml KCl gradient (0 to 0.75 M). Fractions containing Ssh10b proteins were pooled, dialyzed against distilled de-ionized water, and finally lyophilized. The purity of the proteins was confirmed by SDS-PAGE to be more than 95% and by matrix-assisted laser desorption ionization time-of-flight mass spectrometry to be free of nucleic acid contaminants.

NMR Sample Preparation-15N or 13C singly labeled and 15N/13C doubly labeled Ssh10b proteins were expressed in E. coli strain BL21(DE3) grown in M9 minimal medium using 15NH4Cl and/or 13C-glucose as the sole nitrogen and carbon sources. All protein samples for NMR measurements were dissolved in 500 µl of 90% H2O/10% D2O containing 20 mM deuterated acetate buffer, pH 4.8, 50 µl of NaN3, 1 µM 2,2-dimethyl-2-silapentanesulfonic acid (DSS), and 20 mM KCl to a final protein concentration of about 1 mM, unless otherwise indicated. The sample for determination of the dimer interface of Ssh10b was 15N/13C asymmetrically labeled.

NMR Spectroscopy-All NMR experiments were carried out on a Bruker DMX 600 spectrometer equipped with a triple resonance probe and an actively shielded three-axis gradient unit. The experimental temperature was set to 310 K except for the temperature-dependent experiments. 1H chemical shifts were referenced to the internal standard DSS at 0 ppm; 15N and 13C chemical shifts were calculated indirectly using the corresponding consensus Ξ ratios (9).
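Indirect referencing with Ξ ratios is a simple multiplication: the 0-ppm reference frequency of each heteronucleus is the measured 1H (DSS) zero-point frequency times the consensus ratio. A small sketch follows; the spectrometer frequency is hypothetical, and the Ξ values are the commonly tabulated consensus ratios, which should be checked against the cited reference:

```python
# Consensus Xi ratios for referencing 15N and 13C shifts to the 1H (DSS)
# zero point (values as commonly tabulated; verify against reference (9)).
XI_15N = 0.101329118
XI_13C = 0.251449530

# Hypothetical absolute frequency (Hz) of the DSS 1H resonance at 0 ppm on
# a 600 MHz instrument such as the Bruker DMX 600 used here.
nu_dss_1h = 600.13e6

nu_ref_15n = XI_15N * nu_dss_1h   # 0-ppm reference frequency for 15N
nu_ref_13c = XI_13C * nu_dss_1h   # 0-ppm reference frequency for 13C

def shift_ppm(nu_obs, nu_ref):
    """Chemical shift in ppm of an observed frequency relative to nu_ref."""
    return (nu_obs - nu_ref) * 1e6 / nu_ref

print(f"15N reference: {nu_ref_15n / 1e6:.4f} MHz")
print(f"13C reference: {nu_ref_13c / 1e6:.4f} MHz")
```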
Although the assignment of the T-form Ssh10b has already been published (8), assignment of the remaining resolved resonances of the C-form Ssh10b was achieved by further exploring the existing spectra: 3D 1H-13C-15N HNCA, HN(CA)CO, CBCA(CO)NH, HNCACB, and HNCO experiments for backbone assignments, and HBHA(CBCA)NH, HBHA(CBCACO)NH, and CC(CO)NH, as well as 3D 1H-13C HCCH-TOCSY experiments, for side chain assignments. Most of the backbone assignment of [P62A]Ssh10b was obtained by comparison with Ssh10b; the remaining ambiguities were resolved with an HNCA experiment. A 3D NOESY-1H,13C-HMQC experiment was carried out to distinguish the cis and trans conformations of the X-prolyl bonds of Ssh10b. Conformational exchange of Ssh10b was monitored using a series of 2D 1H-15N correlation experiments for simultaneous measurement of 15N longitudinal decay and chemical exchange rates, at exchange delays of 12, 52, 152, 302, 452, 552, 652, 902, 1102, 1302, and 1602 ms (10). The temperature dependence of the 2D 1H-15N HSQC spectra of Ssh10b was measured at temperatures ranging from 283 to 330 K in increments of 3.5 K. To determine the residues involved in the dimer interface, a 2D version of the four-dimensional 1H,13C-HMQC-1H,1H-NOESY-1H,15N-HSQC experiment (11) was carried out on the 15N/13C asymmetrically labeled Ssh10b sample, with 15N evolving in the indirect dimension. The mixing time of the experiment was 150 ms. 2D intensity-modulated HSQC (12) was used to measure the 3JHNα coupling constants on 15N singly labeled proteins. The delay for 3J coupling evolution was set to 30 ms. 2D 1H-15N HSQC experiments were also used to explore the binding behavior of Ssh10b with a synthesized 16-bp double-stranded DNA fragment (5'-GGCAGACGCGTCTGCC). All NMR data were processed and analyzed using Felix 98 (Accelrys Inc.). The data points in each indirect dimension were usually doubled by linear prediction and zero-filled. A 90° to 60° shifted squared sine bell apodization was used for all dimensions prior to Fourier transformation.

Nick Closure Assay-The singly nicked plasmid pUC18 was prepared as described previously (14). The nicked plasmid (1 µg) was incubated with Ssh10b or [P62A]Ssh10b at various mass ratios for 5 min at 298 or 330 K. The ligation reactions were then performed as described previously (15). T4 DNA ligase (3 Weiss units) and Pfu DNA ligase (4 Weiss units) were used for the reactions carried out at 298 and 330 K, respectively. After the ligation reaction, the samples were analyzed by 1.4% agarose electrophoresis in 0.5× Tris-phosphate-EDTA (13).

Co-existence of Two Ssh10b Homodimers with Different Conformations-As shown in the previous study, two sets of cross-peaks (for simplicity denoted as "doublets" hereafter) are observed for most residues of Ssh10b in the 2D 1H-15N HSQC spectrum (Fig. 1A) (8). The signal intensities of the resonance doublets are not equal, with one signal stronger than the other in all doublets. Although the results of chemical cross-linking (1) and the related crystal structure (7) suggest that Ssh10b is a dimer, the resonance doublets with unequal signal intensities require explanation. The Δδ values make it clear that the Ssh10b molecule does not exist as a mixture of dimeric and monomeric forms in solution. If this were the case, the resonance doublets should be observed only for residues located at the dimer interface.
The cross-peaks shown in Fig. 3 correspond to the residues at the Ssh10b dimer interface, detected by X-nucleus edited NOESY. When mapped onto the sequence, the data indicate that helix α2, a portion near the C terminus of strand β3, and the N-terminal part of strand β4 are involved in the dimeric surface (Fig. 4A). However, the Δδ values (Fig. 2) reveal that the residues of the N-terminal part of strand β4 give only "singlet" signals, whereas residues in helix α1 and strands β1 and β2, which are distant from the interface, generate doublet signals (Fig. 4B). The line widths for each pair of doublets are the same. In addition, the ratios of the signal intensities of the doublets are independent of the concentration (0.05-1.5 mM) of Ssh10b, as revealed by NMR experiments (data not shown). These observations therefore also exclude the possibility of an oligomerization equilibrium. Because Ssh10b is dimeric, as confirmed by size-exclusion chromatography (data not shown), it was considered whether two forms of the Ssh10b dimer might co-exist in solution. The form with higher signal intensity in the doublets was assigned as the T-form and the other, with lower signal intensity, as the C-form. No "multiplets" other than doublets were observed for most residues of Ssh10b (Fig. 1A), consistent with both the T-form and the C-form being homodimers, with the monomeric subunits arranged symmetrically in each dimer. The main-chain torsion angle φ is closely related to the backbone conformation of proteins and can be calculated from the 3JHNα scalar coupling constants. The J-coupling constants of Ssh10b were measured by 2D intensity-modulated HSQC (12). The difference between the 3JHNα value for the T-form and that for the C-form (Δ3JHNα) at each residue position is shown in Fig. 2. Residues with an absolute Δ3JHNα value greater than 1 Hz are found in segments spanning the whole molecule: the N-terminal region, the loop linking strand β1 and helix α1, helices α1 and α2, and the C terminus of strand β4. This suggests that the main-chain conformations are different in the two forms of the Ssh10b dimer.

Cis and Trans Conformations of Ssh10b-Fig. 5 shows portions of a 1H-15N heteronuclear chemical exchange spectrum at an exchange delay of 1.3 s (10). Two categories of doublets, classified by their exchange features, were observed for Ssh10b. Residues Thr5, Thr7, and Ser9 gave exchange cross-peaks (Fig. 5A) when the exchange delays were set in the range of 0.1 to 1.6 s. However, the remaining doublets shown in Fig. 1A did not give any exchange cross-peaks at the same exchange delays, giving instead a result like that shown in Fig. 5B for residue Lys97. Thr5-Pro6-Thr7-Pro8-Ser9 is an unstructured N-terminal segment of Ssh10b, as determined by the chemical shift index (8). The appearance of the exchange cross-peaks of Thr5, Thr7, and Ser9 is clearly due to cis-trans isomerization of the X-prolyl bonds in this segment. Thr7 lies between Pro6 and Pro8 and therefore showed two minor peaks in Fig. 1A and four exchange cross-peaks in Fig. 5A. The intensity ratios of the major auto-peaks to the minor ones were about 11:1 at 310 K, a value much higher than that for the remaining doublets in Fig. 1A. Therefore, the remaining doublets were not caused by cis-trans isomerization of the Thr5-Pro6 or Thr7-Pro8 peptide bonds. This was confirmed by the 2D 1H-15N HSQC spectrum of [Δ8]Ssh10b (spectrum not shown), in which all resonance doublets remained the same as those in Fig. 1A, except for the absence of the cross-peaks for residues Gly4, Thr5, Thr7, Ser9, Met10, and Val11.
The 2D 1H-15N HSQC spectrum of [P62A]Ssh10b is shown in Fig. 1B. Only a single set of cross-peaks was observed for all residues of [P62A]Ssh10b, except residues Gly4, Thr5, Thr7, Ser9, and Asn10. Substitution of Pro62 by Ala62 eliminated the cis-trans isomerization, so that the mutant Ssh10b dimer was found in a single conformational state. An overlay of Fig. 1, B and A, shows that the cross-peaks of [P62A]Ssh10b can be mapped onto the cross-peaks of the T-form of Ssh10b, indicating that the T-form of the Ssh10b homodimer adopts the same conformation as the [P62A]Ssh10b homodimer. Fig. 6A shows 1H-1H slices at the 13Cα frequencies of Leu61 and Pro62, extracted from the 3D NOESY-1H,13C-HMQC (lower strip) and the 3D HCCH-TOCSY (upper strip) spectra, for the T-form Ssh10b dimer, whereas Fig. 6B shows those for the C-form Ssh10b dimer. Two inter-residue NOE cross-peaks, between the 1Hα of Leu61 and the 1Hδ1 and 1Hδ2 of Pro62, could be observed in the 3D NOESY-HMQC strip for the T-form Ssh10b dimer (Fig. 6A). The appearance of these two NOEs characterizes the trans conformation of the Leu61-Pro62 peptide bond in the T-form Ssh10b dimer. In the strips corresponding to the C-form Ssh10b dimer (Fig. 6B), only one NOE cross-peak involving the 1Hα of Leu61 was observed, characteristic of the cis conformation of the Leu61-Pro62 bond. The temperature dependence of the populations of the two forms was followed by HSQC spectra (Fig. 7). The ratio of C-form to T-form was around 0.26 at 318 K and greater than 0.6 below 298 K. Clearly, the T-form of the Ssh10b dimer is dominant, although the population of the C-form increases on decreasing the temperature. The 1HN chemical shifts of all resolved cross-peaks in the 2D 1H-15N HSQC spectra were measured at different temperatures for the T-form and the C-form Ssh10b dimers. The differences between the 1HN chemical shifts at 298 K (δ298) and at 318 K (δ318) for the T-form (ΔδT = δT298 − δT318) and the C-form (ΔδC = δC298 − δC318) of an Ssh10b sample containing a high concentration of salt (200 mM KCl), and ΔδT = δT300 − δT320 and ΔδC = δC300 − δC320 of an Ssh10b sample containing a low concentration of salt (20 mM KCl), were obtained (data not shown). On increasing the temperature, the 1HN chemical shifts of the majority of the cross-peaks were shifted upfield (+Δδ) for both the T-form and the C-form Ssh10b dimers. However, this was not the case for residues Ala25, Leu48, Val53, Arg57, and Leu61 of the C-form of Ssh10b in the sample containing 20 mM KCl, or for residues Asn58 and Asp63 of the C-form of Ssh10b in the sample containing 200 mM KCl, which all showed downfield shifts (−Δδ) on increasing the temperature. The extent of the upfield movement of the 1HN chemical shifts (+Δδ) varied among the residues of the Ssh10b dimer that generated resonance doublets. The differences between the changes in the 1HN chemical shifts of the resonances for the T-form and for the C-form of Ssh10b (ΔΔδT-C = ΔδT − ΔδC), obtained from the data for the Ssh10b sample containing 200 mM KCl at 298 and 318 K, are shown in Fig. 2 and are also mapped onto the 3D structure (Fig. 4C). For residues Tyr22, Val23, Ala25, Ala26, and Leu27, located in helix α1, and residues Lys48, Asp51, Val53, Glu54, Arg57, and Asn58, located in helix α2, the ΔΔδT-C values are larger than +0.01 ppm. Residue Val34, located in the turn between helix α1 and strand β2, and Asp63, in the loop linking helix α2 and strand β3, also showed +ΔΔδT-C (>0.01 ppm). However, negative ΔΔδT-C values with absolute values larger than 0.01 ppm were observed for residues Ser35 and Ile37 at the N terminus of strand β2, residues Lys64, Glu66, Gly73, Ser74, and Gln75 in strand β3, and residues Ile92, Ile94, Arg95, Lys96, and Lys97 at the C terminus of strand β4 (Fig. 2). Thus, the upfield shifts of the 1HN resonances of the residues in helices α1 and α2 of the T-form were larger than those of the C-form.
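The relative populations quoted above translate directly into a free-energy difference between the two forms via ΔG = −RT ln K. A small illustrative calculation follows; the 0.26 ratio at 318 K is stated in the text, whereas treating 0.6 as the approximate ratio at 298 K is an assumption, since the text only says the ratio exceeds 0.6 below 298 K:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_g(ratio_c_to_t, temp_k):
    """Free-energy difference G(C-form) - G(T-form) from the C:T ratio."""
    return -R * temp_k * math.log(ratio_c_to_t)

# Ratio 0.26 at 318 K (stated in the text).
print(f"318 K: dG = {delta_g(0.26, 318.0) / 1000:.1f} kJ/mol")  # ~3.6 kJ/mol

# Assumed ratio of 0.6 at 298 K (text states only '>0.6 below 298 K').
print(f"298 K: dG = {delta_g(0.60, 298.0) / 1000:.1f} kJ/mol")  # ~1.3 kJ/mol
```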
However, negative ΔΔδ_T-C values with magnitudes larger than 0.01 ppm were observed for residues Ser35 and Ile37 at the N terminus of strand β2, residues Lys64, Glu66, Gly73, Ser74, and Gln75 in strand β3, and residues Ile92, Ile94, Arg95, Lys96, and Lys97 at the C terminus of strand β4 (Fig. 2). Thus, the upfield shifts of the 1HN resonances of the residues in helices α1 and α2 of the T-form were larger than those of the C-form, whereas the converse holds for the residues in the C-terminal portion of the β-sheet. Upon binding of DNA, significant 1HN chemical shift perturbations were observed in three regions of the Ssh10b dimer. The first region includes helix α1 and the loop linking strand β1 and helix α1; the second is the C-terminal of strand β2 and helix α2; and the third consists of the C terminus and the N terminus of strands β3 and β4, respectively, and the β-turn between them (Fig. 2). Δδ_T(DNA)298 and Δδ_C(DNA)298 (data not shown) showed histograms similar to those of Δδ_T(DNA)318 and Δδ_C(DNA)318. The Δδ(DNA) values provided information about the location of the DNA binding sites and about the local conformational changes of the Ssh10b dimer induced by binding of DNA. Comparing the values of Δδ_T(DNA)318 and Δδ_T(DNA)298 with Δδ_C(DNA)318 and Δδ_C(DNA)298, respectively, reveals differences between the two forms of the Ssh10b dimer in the 1HN chemical shift perturbations caused by DNA binding. The residues showing observable differences are indicated on the 3D structure (Fig. 4D). The differences were found mainly in the two helices and in three β strands. However, upon interaction with DNA, the 1HN resonances of the residues in the segments Ala41-Ser47 and Val76-Ile90 still remain singlets. It seems that DNA binding produces similar conformational changes in these two polypeptide segments in the two forms of the Ssh10b molecule, although different changes in the 1HN resonances between the two forms of the Ssh10b dimer are observed in other regions of the polypeptide chain. Temperature-dependent Interaction of DNA with Ssh10b and [P62A]Ssh10b-The interaction of DNA with the Ssh10b dimer has been found to be temperature-dependent (1). To further investigate the temperature-dependent features of DNA binding, EMSA and nick closure assays were performed on both Ssh10b and the P62A mutant ([P62A]Ssh10b) under identical experimental conditions. In the EMSA assay, a 32P-labeled 108-bp dsDNA fragment (0.5-1 ng) was mixed with different amounts of Ssh10b or [P62A]Ssh10b and analyzed by gel electrophoresis at 293 and 320 K. The bands show the distribution of the products of the DNA-protein interaction by complex size (Fig. 8). The apparent Kd values were similar at the two temperatures at low protein concentration (Fig. 8). However, when the protein concentration was higher, such as 0.16 or 0.32 μM, the cooperative nature of the binding of Ssh10b to DNA makes the migration patterns of the DNA-Ssh10b complexes at 320 K totally different from those at 293 K, while resembling those of the DNA-[P62A]Ssh10b complexes at both temperatures (lanes 3 and 4 in Fig. 8). Therefore, the mode of binding of the Ssh10b dimer to DNA in the complex at 293 K must be different from that at 320 K or in the DNA-[P62A]Ssh10b complex at either temperature. Nick closure assays were carried out to detect the capabilities of the proteins to constrain DNA in supercoils at 298 or 330 K. In the nick closure assay, a singly nicked plasmid pUC18 was ligated in the absence and presence of the proteins. The results for different protein:DNA ratios are shown in Fig. 9. The assay in the absence of protein (lane 1 in Fig. 9) was performed as a control.
At 298 K, addition of Ssh10b to the reaction mixture produced only a weak CCC (covalently closed circular plasmid) band with a high supercoil density at a protein:DNA mass ratio of 2:1 (Fig. 9, upper left panel, lane 4). However, at 330 K the ability of the bound Ssh10b to introduce supercoils into the plasmid increased dramatically. [P62A]Ssh10b showed a similar ability to introduce supercoils into the plasmid at 330 K. However, unlike bound Ssh10b at 298 K, [P62A]Ssh10b was capable of introducing supercoils into the plasmid over the whole range of protein concentrations at 298 K (Fig. 9, lower left panel). (The gel of Ssh10b at 320 K was slightly under-exposed to x-ray film, and that of [P62A]Ssh10b at 320 K was over-exposed.) Clearly, the abilities of Ssh10b and [P62A]Ssh10b to affect the topology of DNA are similar at high temperature (330 K) but different at low temperature (298 K). The results of the EMSA and nick closure assays indicate that [P62A]Ssh10b, existing in a single trans conformation in solution, shows the same features upon interaction with DNA at both high and low temperatures. Therefore, the temperature-dependent nature of the interaction of Ssh10b with DNA correlates with the two conformations of the Ssh10b dimer. Different Conformational Features of the T-form and C-form Ssh10b Homodimer-The conformational features of a protein are strongly influenced by factors that affect the strength of intramolecular hydrogen bonds. Changes in the chemical shifts of 1HN resonances are a sensitive indicator of changes in the strength of hydrogen bonding. The chemical shifts of 1HN resonances are affected by hydrogen bond acceptors, particularly carbonyl groups (20, 21). Wagner and co-workers (22, 23) demonstrated that 1HN chemical shifts depend on the inverse third power of the distance between the 1HN and the hydrogen bond acceptor. In the case of hydrogen bonding with C=O, large downfield shifts are observed for strongly hydrogen-bonded amide protons. Within protein secondary structure, the 1HN of the ith residue forms a hydrogen bond with the C=O of the (i−4)th residue in an α-helix, and in an antiparallel β-sheet the 1HN and the C=O of a residue in one β strand form hydrogen bonds with the C=O and the 1HN of a residue in the opposite strand. When the temperature is increased, the thermal fluctuations of an α-helix or a β-sheet increase, and the average distance between the 1HN and the C=O increases. As a consequence of the weakened hydrogen bonding, the chemical shifts of the 1HN resonances will tend to move upfield. The 1HN resonances of almost all cross-peaks in the 2D 1H-15N HSQC spectrum of the Ssh10b molecule were shifted upfield on increasing the temperature. This is consistent with a general weakening of the hydrogen bonding in both the T-form and the C-form of the Ssh10b dimer on increasing the temperature. However, the temperature-dependent shifts of the 1HN resonances differed in size for the T-form (Δδ_T) and the C-form (Δδ_C). This in turn suggests a difference between the hydrogen bonding strengths within the secondary structure of the two forms of the Ssh10b dimer. The values of ΔΔδ_T-C = Δδ_T − Δδ_C for each residue are shown in Fig. 2. On increasing the temperature, the residues in helices α1 and α2 show larger upfield shifts of the 1HN resonances for the T-form (+ΔΔδ_T-C), whereas the residues involved in the antiparallel β-sheet show larger upfield shifts for the C-form (−ΔΔδ_T-C) (Fig. 2).
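If the cited distance dependence is written schematically as δ(1HN) ≈ a·r⁻³ + δ₀, with r the H···O distance and a > 0 (a sketch of the relation, not a fit to these data), then

\frac{\partial\,\delta(^{1}\mathrm{H}^{N})}{\partial r} = -\frac{3a}{r^{4}} < 0,

so any thermal lengthening of a hydrogen bond moves the corresponding 1HN resonance upfield, and the magnitude of Δδ for a given residue reports on how strongly its hydrogen bond lengthens with temperature. This is the basis for the residue-by-residue comparisons that follow.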
In the antiparallel β-sheet formed by the β2-β4-β3 strands, the 1HN and C=O groups of residues Ile94, Arg95, Lys96, and Lys97 form hydrogen bonds with the C=O and 1HN groups of Ile37, Glu66, Ser35, and Lys64, respectively. Residues Ile94, Arg95, Lys96, and Lys97 are located in the C-terminal of strand β4. Residues Ile37 and Ser35 are located in the N-terminal region of strand β2, and Glu66 and Lys64 are located in the N-terminal region of strand β3. Thus, on increasing the temperature, the lengthening of the hydrogen bond distances in the portion of the antiparallel β-sheet near the C terminus of the Ssh10b molecule is greater for the C-form than for the T-form Ssh10b dimer. Conversely, the lengthening of the hydrogen bond distances in the α-helices is greater for the T-form. These results suggest that, for the C-form relative to the T-form of the Ssh10b molecule, the spatial packing of the residues is tighter in the α-helices and looser in the portion of the antiparallel β-sheet near the C terminus of the molecule. Further support for the different temperature-dependent features of the secondary structure of the T-form and the C-form of the Ssh10b dimer came from the observation that, for a few 1HN resonances, the direction of the temperature-dependent shift differs in the two forms. The 1HN resonances of residues Ala25, Leu48, Val53, Arg57, and Leu61 for the sample containing 20 mM KCl, and of residues Asn58 and Asp63 for the sample containing 200 mM KCl, shifted downfield for the C-form and upfield for the T-form of the Ssh10b molecule. Residues Ala25, Leu48, Val53, Arg57, and Asn58 are located in α-helices. Leu61 and Asp63 are the nearest neighbors of Pro62, located in the loop linking helix α2 and strand β3. The magnitudes of the downfield shifts of the 1HN resonances of these residues upon increasing temperature were in the range 0.6-2.8 ppb/K. In addition, the Δ3J_NHα values for residues Leu24, Leu27, Lys48, Val53, and Glu54 were all greater than 1 Hz in magnitude (Fig. 2), corresponding (via the Karplus dependence of 3J_NHα on the backbone torsion angle φ) to a change in φ of more than ±10° between the two forms of the Ssh10b molecule. Residues Ala25 and Asn58 also showed small variations of Δ3J_NHα (Fig. 2). Thus, the differences in the hydrogen bonding strengths in the secondary structure of the two forms correlate with main-chain conformational differences between the T-form and the C-form of the Ssh10b molecule. DNA Binding Sites on the T-form and the C-form Ssh10b Dimers-Proteins bind in the major or minor grooves of DNA, and some protein structures contact DNA in both grooves simultaneously (24). In the model proposed for the DNA-Alba complex (7), the Alba dimer interacts simultaneously with a major groove and the two flanking minor grooves of the DNA. Residues Lys16, Lys17, and Arg42 in the central "belly" of the Alba dimer are involved in DNA binding at the major groove, and the β-hairpin of Alba interacts with the minor grooves. Ssh10b lacks the first three N-terminal amino acid residues of Sso10b but is otherwise identical in sequence. In fact, the protein used to solve the Alba crystal structure is identical to Ssh10b (7). Thus, Ssh10b is expected to have the same DNA binding sites as shown by the DNA-Alba complex. Examination of the interaction of the Ssh10b dimer with DNA in solution by NMR spectroscopy revealed the location of the DNA binding sites on both the T-form and the C-form Ssh10b molecule. The DNA binding regions of the
6,878.2
2003-12-19T00:00:00.000
[ "Biology", "Chemistry" ]
Agricultural Sprinkler for Irrigation System In this review paper, the need to irrigate farms or gardens in a way that can replace natural rainfall when it is not available led to the planning and construction of the sprinkler irrigation system. In this study, different types of sprinkler irrigation systems were examined with respect to their design, construction and installation. The design was based on employing a rotating system to irrigate a small plot, which provides a suitable scientific basis for correct water scheduling, evaluation of the system, and minimizing water wastage and runoff. It was designed for various crops. The purpose of the design and installation is to equip the irrigation research field of the University with irrigation field demonstration and practice facilities. Keywords— Agricultural, Crops, Construction, Design, Irrigation. INTRODUCTION Irrigation is a man-made method of water application to soil to enhance the production of crops. Irrigation water is supplied to supplement the water available from rainfall, soil moisture and the capillary rise of ground water. In many areas of the world, the quantity of rainfall is not sufficient to meet the moisture requirements of crops. Hence, successful crop production very much requires adequate provision for irrigation (Benami & Ofen, 1983; Jensen, 1980; Michael, 2005). According to Sharma and Sharma (2004), the benefits of irrigation systems can be categorized as direct and indirect. [1] The direct benefits include: increase in crop production through higher yield, to achieve self-sufficiency in food; cultivation of cash crops; manifold appreciation of land value, which enriches the land holders; domestic water supply to towns and villages (in India, cities like Delhi, Jaipur, Bikaner and Chandigarh depend upon canal water for public water supply); and hydropower generation at dam sites and canal falls. The indirect benefits are: increase in the gross domestic product of the country; increase in revenue from sales tax on food grains; increase in employment, which retards migration to cities for livelihood; higher wages for farm laborers; creation of more jobs and incomes; and the rise of a whole array of agro-based industries. Sprinkler irrigation is an important improvement over conventional surface irrigation. It simulates natural rainfall by spreading water in the form of rain uniformly over the land surface, when needed, in the required quantity and in a uniform pattern. Water is applied at a rate below the infiltration rate of the soil so as to avoid surface runoff from irrigation. Sprinkler irrigation systems are suitable for undulating lands, areas with limited water availability, sandy or shallow soils, and situations where uniform application of water is needed. Drip irrigation is a basic and artificial method of supplying water to the roots of the plant; it is also called micro irrigation. In the past few years there has been rapid growth in this type of system. The user communicates with the centralized unit via SMS or text. The centralized unit communicates with the system through SMS or text, which is received by the GSM module with the help of the SIM card. The GSM module sends this data to the ARM7, which also continuously receives information from the sensors in coded form. After processing, this data is displayed on the LCD.
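As a minimal sketch of the control flow just described (the message formats, sensor scaling and threshold below are illustrative assumptions, not values taken from the reviewed systems), the SMS-activated loop might look like this in Python:

```python
# Minimal sketch of an SMS-activated irrigation controller (illustrative only).
# In a real system the GSM modem, ARM7 peripherals and LCD would be accessed
# through their own drivers; here they are simulated with plain functions.
import random

def read_sensors():
    """Simulate the coded sensor values the ARM7 receives continuously."""
    return {
        "soil_moisture_pct": random.uniform(10, 60),   # hypothetical scaling
        "water_level": random.choice(["empty", "low", "medium", "high"]),
    }

def send_sms(text):
    print("SMS to subscriber:", text)    # stands in for the GSM modem

def handle_command(command, motor_on):
    """Act on an activation or status command from the subscriber."""
    sensors = read_sensors()
    if command == "START":
        if sensors["water_level"] == "empty":
            send_sms("Tank empty - motor not started.")
            return False
        send_sms(f"Field status: {sensors}. Motor started.")
        return True
    if command == "STATUS":
        send_sms(f"Field status: {sensors}. Motor on: {motor_on}.")
    return motor_on

motor_on = False
for cmd in ["STATUS", "START"]:          # stands in for incoming SMS messages
    motor_on = handle_command(cmd, motor_on)

# Once running, the controller keeps monitoring and switches off automatically.
while motor_on:
    if read_sensors()["soil_moisture_pct"] >= 45:    # assumed sufficient level
        motor_on = False
        send_sms("Soil moisture sufficient - motor turned off.")
```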
[3] Thus, briefly, whenever the system receives the activation command from the subscriber, it checks all the field conditions, provides detailed feedback to the user, and waits for another activation command to start the motor. The motor is controlled by a simple modification of the internal structure of the starter. The starter coil is indirectly activated via a transistorized relay circuit. When the motor is started, the soil moisture and water level are continuously monitored, and once the soil moisture reaches a sufficient level the motor is automatically turned off and a message is sent to the subscriber that the motor has been turned off. The water level indicator indicates three levels, which are low, medium and high, as well as an empty tank. [3] II. TYPES OF SPRINKLER 1) Small irrigation system [12] Fig. 2. Block Diagram of Small irrigation system [12] In this particular type, the temperature value is monitored over a fixed range of values. The humidity value also varies over a range, along with the soil moisture values. The height of the plant is taken into consideration from the start of the experiment, in terms of percentage, along with the diameter, so that it can be concluded whether the plant is healthy. The colour of the leaves is also taken into consideration, as it represents the condition of a healthy plant. Moreover, the value of the soil moisture shows whether the soil is in good condition, and this small irrigation system performs well, as it waters the plant with the correct amount of water. Hence, it can be said that the plant is neither over-watered nor under-watered. 2) Automatic Irrigation System [9] Fig. 3. Block diagram of Automatic Irrigation System [9] The soil moisture sensors, which are simply copper strands, are inserted into the soil. The soil sensing arrangement is used to measure the conductivity of the soil: wet soil is more conductive than dry soil. The soil sensing module features a comparator. The voltage from the prongs is compared with a predefined voltage, and the output of the comparator is high only when the soil condition is dry. This output from the soil sensing arrangement is given to the analogue input pin of the microcontroller, which continuously monitors this pin. When the moisture in the soil is above the threshold, the microcontroller displays a message mentioning this, and the motor is off. When the output from the soil sensing arrangement is high, i.e. the moisture of the soil is low, this triggers the microcontroller, which displays an appropriate message on the LCD; the output of the microcontroller, which is connected to the base of the transistor, goes high. When the transistor is turned on, the relay coil is energized and activates the motor. An LED is also turned on and acts as an indicator. When the moisture of the soil reaches the threshold value, the output of the soil sensing arrangement goes low and the motor is turned off. 3) Periodic Move Sprinkler Systems [5] A periodic move system is set up in a fixed location for a specified length of time to apply a required depth of water. This is referred to as the irrigation set time. After an irrigation set, the lateral or sprinkler is moved to the next set position. Application efficiencies range from 50% to 75%.
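The comparator-and-relay behaviour described above for the automatic irrigation system amounts to threshold switching. The small sketch below adds a hysteresis band (an assumption on our part, commonly used to stop a relay from chattering around a single set point); the numeric thresholds are illustrative:

```python
# Threshold-with-hysteresis relay control, sketching the comparator logic
# described for the automatic irrigation system. Numbers are illustrative.
DRY_THRESHOLD = 30.0   # % moisture below which the motor turns on (assumed)
WET_THRESHOLD = 45.0   # % moisture above which the motor turns off (assumed)

def update_motor(moisture_pct, motor_on):
    """Return the new motor state given a soil-moisture reading."""
    if not motor_on and moisture_pct < DRY_THRESHOLD:
        return True    # soil dry: energize relay, start motor, light LED
    if motor_on and moisture_pct > WET_THRESHOLD:
        return False   # soil wet enough: de-energize relay, stop motor
    return motor_on    # inside the hysteresis band: keep current state

# Example trace over a sequence of readings:
motor = False
for reading in [50, 40, 28, 33, 44, 47, 46]:
    motor = update_motor(reading, motor)
    print(f"moisture={reading}%  motor={'ON' if motor else 'OFF'}")
```

The gap between the two thresholds plays the role of a switching margin: with a single set point, sensor noise near the threshold would toggle the relay rapidly.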
[5] 4) Center Pivot Sprinkler Systems [5] Center pivot systems consist of one lateral supported by towers, with one end anchored to a fixed pivot structure and the other end continuously traveling around the pivot point while applying water. This system irrigates a circular field unless end guns and swing lines are cycled on in corner areas to irrigate more of a square field. The water is supplied from the source to the lateral through the pivot. The lateral pipe with sprinklers is supported on drive units. The drive units are usually powered by hydraulic water drives or electric motors. Various operating pressures and configurations of sprinkler heads or nozzles (types and spacing) are placed on the lateral. Sprinkler heads with nozzles may be high or low impact, gear driven, or one of many low-pressure spray heads. A higher-discharge, part-circle gun is often used at the extreme end (end gun) of the lateral to irrigate the outer fringe of the field. Each tower, which is usually mounted on rubber tires, has a power device designed to propel the system around the pivot point. The most common power units include electric motors, hydraulic water drives, and hydraulic oil drives. When feasible, agricultural operators are converting from portable sprinkler systems and travelers to center pivot systems. Many improvements have been made over the years. These include the corner arm system: some models contain an additional swing lateral unit that expands to reach the corners of a field and retracts to a trailing position when the system is along the field edge. When the corner unit starts, the discharge flow in all other heads is reduced. Overall field distribution uniformity is affected by the corner arm. Typically, 85% of maintenance effort is spent maintaining the corner arm unit itself. Due to less-than-adequate maintenance of corner systems operating all the time, total field application uniformity is reduced even further. Many techniques have been developed to reduce the energy used, lower system flow capacities, and maximize water use efficiency. These include Low Energy Precision Application (LEPA) and Low Pressure In-Canopy (LPIC) systems. LEPA systems (precision application) require adequate soil, water and plant management. LPIC systems are used on lower-value crops where localized water translocation is acceptable (30 feet before or behind the lateral position). Water is applied within the crop canopy through drop tubes fitted with low-pressure (5-10 psi) application devices near the ground surface. Good soil and water management are required to achieve application efficiencies in the high 80s. LPIC systems are not suitable for use on low-intake soils. In New Jersey, most center pivot systems are low-pressure, low-volume systems with spray heads or rotator heads on drops. Each sprinkler has a pressure regulator set at 10-20 psi. With proper management, application efficiencies with center pivot systems can be 75-90 percent, depending on wind speed and direction, sprinkler type, operating pressure, and tillage practices. 5) Traveling Boom Sprinkler Systems [5] A traveling boom system is analogous to a traveling gun, except that several nozzles are used. These systems have higher distribution uniformity than traveling guns for the same diameter of coverage.
They are not as popular in New Jersey as the traveling gun system; however, they do provide an option when a grower prefers a lower-volume, lower-pressure system to reduce the high energy costs associated with a traveling gun system. The booms can be designed with low-pressure, low-flow nozzles that operate at higher efficiency and uniformity. The traveling boom is usually rotated by back pressure from fixed nozzles, or it may be fixed. It is typically moved by a self-contained, continuously moving electromagnetic unit, by dragging or coiling the water feed hose on a reel. A boom can be nearly 100 feet long, with uniformly spaced nozzles that overlap (similar to a linear move lateral). 6) Smartphone based Irrigation system [11] Fig. 4. Block diagram of Smartphone based Irrigation System [11] The Raspberry Pi is the heart of the overall system. The Raspberry Pi 3 incorporates a number of enhancements and new features: improved power consumption, increased connectivity and greater IO are among the improvements to this powerful, small and lightweight board, along with its GPIO (General Purpose Input Output) pins. The Raspberry Pi cannot directly drive the relay: its outputs are only 0 V or 3.3 V, while 12 V is needed to drive the electromechanical relay. In that case a driver circuit is needed; the driver circuit takes the low-level input and provides the 12 V. Here, two relays are used to switch on the water motor. Soil moisture, humidity and temperature sensors are connected to the Raspberry Pi board through an Arduino. If the soil moisture and humidity values are below the given settings and the temperature is high, the water motor will be switched on; whereas if the moisture level and humidity are high and the temperature is low, the motor will be switched off through the relay. The application has a GUI which shows all the data to the user. The modes, as specified, can be selected by the user in the app itself. Fig. 5. Different Methods and their Features [13] III. SOME RULES FOR SYSTEM DESIGN 1) The main line should run downhill. 2) Laterals should be laid across the slope or nearly on the contour. 3) For multiple lateral operation, lateral pipe sizes should not be more than two diameters. 4) The water supply source should be nearest to the center of the area, and the layout should facilitate and minimize lateral movement during the season. 5) A booster pump should be considered where a small portion of the field would require high pressure at the pump. 6) The layout should be modified to apply different rates and amounts of water where soils are greatly different within the design area. Fig. 6. Layout of the irrigation system design [6] IV. IRRIGATION SYSTEM PLANNING AND DESIGN CONSIDERATION Proper system planning and design is important to Irrigation Water Management (IWM) and requires the thoughtful consideration of many elements. Selecting a system must include the following major items: management, water, soil, and crops. [13] a. Management: The irrigator and planner need to collaborate in order to develop the best plan. The discussion of the desired system type must include an understanding of management, operation, and maintenance requirements. [13] b. Water: The source, whether surface or ground, and the quantity, quality, availability, and flow are needed to determine the type of system that is appropriate. Most sources of ground water require power, regardless of which type of system is planned.
With micro irrigation, a ground water source might only need an inline screen to clean the water, while a surface water source would require a sophisticated filtration system. Some sources, due to high salinity (EC), may not be suitable for sprinkler irrigation. A micro irrigation system works best with a constant source, while a surface system can operate with a long interval between water applications. A surface system generally requires a comparatively high flow for the most efficient application, while sprinkler or micro irrigation systems can function well at a lower rate of application. [13] c. Soil: Many soil qualities are important when the planning of an irrigation system is taken into consideration. Soil texture is a good indicator of water holding capacity (WHC), permeability, and transmissivity. WHC is especially important when considering a surface system, due to the intervals between irrigations. Permeability plays an important role in surface system design and, to a lesser extent, in sprinkler design. Transmissivity, the ability of water to move through the soil, is important when considering a point source of irrigation, as with drip emitters. The water must be able to enter and move through the root zone. d. Crops: The selection of crops to be grown can be limited by water quality and quantity. High salinity can cause yield reduction and even crop failure, depending upon the crop planted. Other important considerations include season and site: 1. Season: The length of the season is important for crop selection, and also for justifying the expense of any system planned. 2. Location: System structures and hardware must be able to withstand climate extremes of temperature, humidity, precipitation, or wind. Proximity to wildlife, cattle, and humans also suggests necessary precautions to consider. V. SYSTEM OPERATION There are several ways to operate the various types of traveling irrigation machines. The cable-drawn traveling irrigator will be used as an example to illustrate the way travelers are operated. This system has a trailer carrying the gun sprinkler. The trailer is also equipped with a water-powered winch and a cable. The winch is driven by water pressure from the pumping unit, while the gun sprinkler is supplied with water from the pump via a mainline, which has hydrants onto which the hose is connected. [2] The following procedure is the way such a system is operated, step by step: [2] 1. The tractor, hose reel and irrigating unit are harnessed in that order along the tow-path. 2. The cable is anchored at one end of the field. 3. The tractor is then driven to the hydrant. 4. The hose is then connected to the hydrant. 5. The cable and hose are then unwound by driving the tractor to the other end of the field. 6. The next step is to disconnect the hose. 7. The hose reel and the irrigating unit are then brought back to the first position. 8. The hose is attached to the irrigating unit, and the unit is also detached from the hose reel. 9. The hose reel trailer is then driven to the position where the cable is anchored. 10. The pump should then be started. The irrigating unit will then start to work; as it irrigates, it winds the cable on the winch and in the process pulls itself along the cable. Once it reaches a pre-determined distance close to the other end, it automatically stops moving and irrigating.
If a standing time is allowed for, it stops moving but continues to irrigate during the standing time. [2] The following procedure should be followed when changing position to the next tow-path: [2] 1. The hose is disconnected from the hydrant as well as from the irrigating unit. 2. The hose is then connected to the hose reel. 3. The tractor should be connected to the hose reel before the hose is drained by air pressure from a compressor. 4. The hose is then rewound, and the equipment is moved to the next tow-path. CONCLUSION This review paper describes the basics of sprinkler irrigation; the performance of sprinkler systems, including uniformity and efficiency of application; the types and characteristics of sprinkler systems currently in use; and design and management procedures for specific types of sprinkler systems. Information is provided to improve the design and management of sprinkler systems, which are the most rapidly growing form of irrigation today. The systems provide several benefits and can operate with less manpower. The above-mentioned systems supply water when the moisture in the soil goes below the reference value. Due to the direct transfer of water to the roots, water is conserved, and the moisture-to-soil ratio at the root zone is kept approximately constant. Thus the system is efficient and adaptable to a changing environment. The concept can be enhanced in the future by adopting DTMF technology. The above-mentioned methods depend essentially on the output of the sensing arrangement; whenever excess water is needed in the desired field, this will not be possible using the sensing arrangement alone. For this, DTMF technology would have to be adopted; by using it, the desired field can be irrigated with the desired amount of water.
4,299.4
2020-05-11T00:00:00.000
[ "Agricultural and Food Sciences", "Engineering" ]
Constraining top quark effective theory in the LHC Run II era We perform an up-to-date global fit of top quark effective theory to experimental data from the Tevatron, and from LHC Runs I and II. Experimental data include total cross-sections up to 13 TeV, as well as differential distributions, for both single top and pair production. We also include the top quark width, charge asymmetries, and polarisation information from top decay products. We present bounds on the coefficients of dimension-six operators, and examine the interplay between inclusive and differential measurements, and Tevatron/LHC data. All results are currently in good agreement with the Standard Model. Introduction One of the primary goals of the Large Hadron Collider (LHC) is to uncover the precise mechanism responsible for electroweak symmetry breaking. Going beyond its ad hoc implementation in the Standard Model (SM), most realisations of this mechanism predict that new, possibly non-resonant physics will appear at the (multi-)TeV scale. Faced with the large number of such scenarios, and the frequent degeneracy in their experimental signatures, it has become customary to parametrize deviations of LHC measurements from their Standard Model predictions in terms of model-independent parameters, where possible. In Higgs production, for instance, the deviations in early inclusive cross-section measurements are described by 'signal strength' ratios. Likewise, deviations in electroweak parameters are often expressed in the language of anomalous couplings. With the LHC Run I at a close, the main message to be drawn is that, apart from a few scattered anomalies, all measurements are in agreement with Standard Model predictions. This suggests that the new degrees of freedom, if they exist at all, are separated in mass [1,2] from the Standard Model fields*. If this is true, the new physics can be modelled by an infinite series of higher-dimensional effective operators [4][5][6][7]. From a phenomenological perspective, these have the advantage over simple signal strengths in that they can also accommodate differential measurements and angular observables, since the operators lead to new vertex structures which modify event kinematics. They are also preferable to anomalous couplings since they preserve the Standard Model SU(3)_C × SU(2)_L × U(1)_Y gauge symmetry, so can more easily be linked to ultraviolet completions than arbitrary form factors. These merits have not gone unnoticed, as effective field theory (EFT) techniques have received much attention in interpreting available Higgs results [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25]. This area, however, is still in its infancy, as such analyses are currently limited on the experimental side by low statistics. Top quark physics, on the other hand, has entered a precision era, with data from the LHC and Tevatron far more abundant. In addition, the top quark plays a special role in most scenarios of Beyond the Standard Model physics, motivating scrutiny of its phenomenology. Furthermore, the top sector is strongly coupled to Higgs physics owing to the large top quark Yukawa coupling, and so represents a complementary window into physics at the electroweak scale. Thus, it is timely to compute the constraints on new top interactions through a global fit of all dimension-six operators relevant to top production and decay at hadron colliders.
In a previous work [52], we published constraints, in a global fit, on all dimension-six operators that contribute to top pair and single top production only. Our fitting approach used techniques borrowed from Monte Carlo event generator tuning, namely the Professor [53] framework. The purpose of this paper is to expand on our previous study by adding new measurements, which are sensitive to a set of operators not previously examined, including previously unreleased 8 and 13 TeV data and decay observables, and also to provide a more detailed review of our general fitting procedure. The paper is structured as follows. In Section 2 we review the higher-dimensional operators relevant for top quark physics, and in Section 3 we review the experimental measurements entering our fit, as well as the limit-setting procedure we adopt. In Section 4 we present our constraints, and discuss the complementarity of LHC and Tevatron analyses, and the improvements obtained from adding differential distributions as well as inclusive rates. In Section 5 we interpret our constraints in the context of two specific new physics models. Finally, in Section 6 we discuss our results and conclude. Higher-dimensional operators In effective field theory language, the Standard Model Lagrangian is the first term in an effective Lagrangian,

\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \frac{1}{\Lambda^2}\sum_i C_i O_i + \mathcal{O}(\Lambda^{-4}),

where Λ generically represents the scale of the new physics. From a top-down viewpoint, the higher-dimensional terms that are suppressed by powers of 1/Λ originate from heavy degrees of freedom that have been integrated out. In this way, the low-energy effects of decoupled new physics can be captured without the need to consign oneself to a particular ultraviolet model. The leading contributions to L_eff at collider energies enter at dimension-six; the relevant non-redundant operator set (eq. (3)) is taken from Ref. [54]. Given the simplicity with which it captures modifications to SM fermion couplings, this basis is well-suited to top EFT. For basis choices of interest in Higgs physics, see e.g. Refs. [55][56][57][58][59], and Ref. [60] for a tool for translating between them. We adopt the same notation as Ref. [54], where T^A = λ^A/2 are the SU(3) generators, and τ^I are the Pauli matrices, related to the generators of SU(2) by S^I = τ^I/2. For the four-quark operators in the left column of eq. (3), we denote a specific flavour combination (q̄_i…q_j)(q̄_k…q_l) by e.g. O^{ijkl}_{4q}. It should be noted that the operators O_uW, O_uG and O_uB are not hermitian and so may have complex coefficients which, along with the dual-field-strength operators O_G̃ and O_ϕG̃, lead to CP-violating effects. These do not contribute to Standard Model spin-averaged cross-sections, though they are in principle sensitive to polarimetric observables such as spin correlations, and should therefore be treated as independent operators. However, currently available measurements that would be sensitive to these degrees of freedom have been extracted by making model-specific assumptions that preclude their usage in our fit, e.g. by assuming that the tops are produced with either SM-like spin correlation or no spin correlation at all, as in Refs. [61,62]. We will discuss this issue in more detail in the next section. With these caveats, a total of 14 constrainable CP-even dimension-six operators contribute to top quark production and decay at leading order in the SMEFT. Experimental inputs The experimental measurements used in the fit are included in Table 1. All these measurements are quoted in terms of 'parton-level' quantities, that is, top quarks and their direct decay products.
Whilst it is possible to include particle-level observables, these are far less abundant and are beyond the scope of the present study. The importance of including kinematic distributions is manifest here. For top pair production, for instance, we have a total of 195 measurements, 174 of which come from differential observables. This size of fit is unprecedented in top physics, which underlines the need for a systematic fitting approach, as provided by Professor. Indeed, top pair production cross-sections make up the bulk of the measurements used in the fit. Single top production cross-sections comprise the next dominant contribution. We also make use of data on charge asymmetries in top pair production, inclusive measurements of top pair production in association with a photon or a Z boson (ttγ and ttZ), and observables relating to top quark decay. We take each of these categories of measurement in turn, discussing which operators are relevant and the constraints obtained on them from data. Treatment of uncertainties The uncertainties entering our fit can be classed into three categories: Experimental uncertainties: We generally have no control over these. In cases where statistical and systematic (and luminosity) errors are recorded separately, we add them in quadrature. Correlations between measurements are also an issue: the unfolding of measured distributions to parton level introduces some correlation between neighbouring bins. Where estimates of these effects have been provided in the experimental analysis, we use this information in the fit; where they have not, we assume zero correlation. However, we have checked that bin correlations have little effect on our numerical results. There will also be correlations between apparently separate measurements: the multitude of different top pair production cross-section measurements will clearly be correlated due to overlapping event selection criteria, detector effects, etc. Without a full study of the correlations between different decay channels measured by the same experiment, these effects cannot be completely taken into account, but based on the negligible effect of the bin-by-bin correlations on our numerical results, we can expect these effects to be small as well. Standard Model theoretical uncertainties: These stem from the choice of parton distribution functions (PDFs), as well as from neglected higher-order perturbative corrections. As is conventional, we model the latter by varying the renormalisation and factorisation scales independently in the range µ_0/2 ≤ µ_{R,F} ≤ 2µ_0, where we use µ_0 = m_t as the default scale, and take the envelope as our uncertainty. For the PDF uncertainty, we follow the PDF4LHC recommendation [101] of using the CT10 [102], MSTW [103] and NNPDF [104] NLO fits, each with associated scale uncertainties, then taking the full width of the scale+PDF envelope as our uncertainty estimate, i.e. we conservatively assume that scales and parton densities are 100% correlated. Unless otherwise stated, we take the top quark mass to be m_t = 173.2 ± 1.0 GeV. We do not consider electroweak corrections. Only recently has substantial progress been made in extending the dimension-six-extended SM to higher order; see Refs. [105][106][107][108][109][110][111][112][113][114][115][116][117][118]. Including these effects is beyond the scope of this work, also because we work to leading-order accuracy in the electroweak expansion of the SM.
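As a mechanical illustration of the envelope prescription just described (the cross-section numbers below are invented placeholders, and the full-width treatment reflects the 100% scale/PDF correlation assumed in the text):

```python
# Sketch of the scale + PDF envelope prescription (illustrative numbers only).
from itertools import product

mu0_factors = [0.5, 1.0, 2.0]   # independent muR, muF variations about mu0

def sigma(pdf, fR, fF):
    """Hypothetical cross-section (pb) for a given PDF set and scale choice."""
    base = {"CT10": 250.0, "MSTW": 247.0, "NNPDF": 252.0}[pdf]
    return base * (1.0 + 0.03 * (fR - 1.0) - 0.02 * (fF - 1.0))  # toy model

lo, hi = float("inf"), float("-inf")
for pdf in ["CT10", "MSTW", "NNPDF"]:
    for fR, fF in product(mu0_factors, mu0_factors):
        val = sigma(pdf, fR, fF)
        lo, hi = min(lo, val), max(hi, val)

# Full width of the combined envelope: no quadrature combination, since scale
# and PDF variations are conservatively treated as 100% correlated.
central = sigma("CT10", 1.0, 1.0)
print(f"sigma = {central:.1f} pb, envelope = [{lo:.1f}, {hi:.1f}] pb")
```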
QCD corrections to four-fermion operators included via renormalisation group equations are typically of the order of 15%, depending on the resolved phase space [114]. As pointed out in Ref. [119], these effects can be important in electroweak precision data fits. Interpolation error: A small error relating to the Monte Carlo interpolation (described in more detail in the next section) is included. This is conservatively estimated to be 5%, as discussed in the following section, and is subleading compared to the previous two categories. Fitting procedure Our fitting procedure, briefly outlined in Ref. [52], uses the Professor framework. The first step is to construct an N-dimensional hypercube in the space of dimension-six couplings, compute the observables at each point in the space, and then fit an interpolating function f(C) that parametrises the theory prediction as a function of the Wilson coefficients C = {C_i}. This can then be used to rapidly generate theory observables for arbitrary values of the coefficients. The total cross-section depends on the Wilson coefficients as

σ(C) = σ_SM + Σ_i C_i σ_i^{int}/Λ² + Σ_{i≤j} C_i C_j σ_{ij}/Λ⁴, (4)

which motivates choosing the fitting function to be a second-order or higher polynomial,

f(C) = α + Σ_i β_i C_i + Σ_{i≤j} γ_{i,j} C_i C_j + …. (5)

In the absence of systematic uncertainties, each observable would exactly follow a second-order polynomial in the coefficients; higher-order terms capture bin uncertainties which modify this. The polynomial also serves as a useful check that the dimension-six approximation is valid: by comparing eq. (4) with eq. (5), we see that the terms quadratic in C_i are small provided that the coefficients γ_{i,j} in the interpolating function are small. This is a more robust way to ensure the validity of the dimension-six approximation than to assume a linear fit from the start. In practice, to minimise the interpolation uncertainty, we use up to a 4th-order polynomial in eq. (5), depending on the observable of interest. The performance of the interpolation method is shown in Figure 1, which depicts the fractional deviation of the polynomial fit from the explicit MC points used to constrain it. The central values and the sizes of the modelling uncertainties may both be parameterised with extremely similar performance, with 4th order performing best for both. The width of this residual mismodelling distribution, ∼ 3% for each of the value and error components, motivates the total 5% interpolation uncertainty included in the goodness of fit of the interpolated MC polynomial f(C) to the experimentally measured value E,

χ²(C) = Σ_O Σ_{i,j} [f_i(C) − E_i] (V⁻¹)_{i,j} [f_j(C) − E_j], with V_{i,j} = ρ_{i,j} σ_i σ_j, (6)

where we sum over all observables O and all bins i in that observable. We include the correlation matrix ρ_{i,j} where this is provided by the experiments; otherwise ρ_{i,j} = δ_{ij}. The uncertainty on each bin is given by σ_i = √(σ²_{th,i} + σ²_{exp,i}), i.e. we treat theory and experimental errors as uncorrelated. The parameterisation of the theory uncertainties is restricted so as not to become larger than in the training set, to ensure that polynomial blow-up of the uncertainty at the edges of the sampling range cannot produce a spuriously low χ² and disrupt the fit. We have hence constructed a fast parameterisation of the model goodness-of-fit as a function of the EFT operator coefficients.
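To make the construction concrete, here is a deliberately small toy version of the procedure (two Wilson coefficients, one observable bin, a quadratic interpolating polynomial and invented pseudo-data); it is a sketch of the idea, not the Professor implementation itself:

```python
# Toy Professor-style workflow: fit a quadratic polynomial in two Wilson
# coefficients to "MC" points, then scan the chi^2 it defines.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def mc_observable(c1, c2):
    """Stand-in for an expensive MC prediction of one observable bin."""
    return 100.0 + 8.0 * c1 - 5.0 * c2 + 1.5 * c1 * c2 + 0.8 * c1**2

# 1) Sample the coupling hypercube and "run the MC" at each point.
pts = rng.uniform(-2, 2, size=(50, 2))
y = np.array([mc_observable(a, b) for a, b in pts])

# 2) Fit f(C) = alpha + beta.C + C^T gamma C by least squares.
c1, c2 = pts[:, 0], pts[:, 1]
A = np.column_stack([np.ones_like(c1), c1, c2, c1**2, c2**2, c1 * c2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def f(c):
    x1, x2 = c
    return coeffs @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# 3) chi^2 against a pseudo-measurement, with a 95% CL threshold for k d.o.f.
E, sigma = 101.0, 4.0                  # invented measurement and uncertainty
chi2_of = lambda c: ((f(c) - E) / sigma) ** 2
k = 1                                  # N_measurements - N_coefficients (toy)
threshold = chi2.ppf(0.95, k)

grid = [(a, b) for a in np.linspace(-2, 2, 5) for b in np.linspace(-2, 2, 5)]
allowed = [c for c in grid if chi2_of(c) <= threshold]
print(f"95% CL threshold for k={k}: {threshold:.2f}; "
      f"{len(allowed)}/{len(grid)} grid points allowed")
```

In the real fit the polynomial is built per bin and up to 4th order, and the χ² includes bin correlations as in eq. (6); we now return to how this parameterisation is used.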
This may be used to produce χ² maps in slices or marginalised projections of the operator space, which are then transformed into confidence intervals on the coefficients C_i, defined by the regions for which

∫₀^{χ²(C)} f_k(x) dx ≤ CL, (7)

where typically CL ∈ {0.68, 0.95, 0.99} and f_k(x) is the χ² distribution for k degrees of freedom, which we define as k = N_measurements − N_coefficients. Results The entire 59-dimensional operator set of Ref. [54] was implemented in a FeynRules [120] model file. The contributions to parton-level cross-sections and decay observables from the above operators were computed using MadGraph/MadEvent [121], making use of the Universal FeynRules Output (UFO) [122] format. We model NLO QCD corrections by including Standard Model K-factors (bin-by-bin for differential observables), where the NLO observables are calculated using MCFM [123], cross-checked with MC@NLO [124,125]. These K-factors are used for arbitrary values of the Wilson coefficients, thus modelling NLO effects in the pure-SM contribution only. More specifically, this amounts to performing a simultaneous expansion of each observable in the strong coupling α_s and the (inverse) new physics scale Λ⁻¹, and neglecting terms ∼ O(α_s Λ⁻²). Our final 95% confidence limits for each coefficient are presented in Figure 12; we discuss them in more detail below. Top pair production By far the most abundant source of data in top physics is from the production of top pairs. The CP-even dimension-six operators that interfere with the Standard Model amplitude are collected in eq. (8); they comprise the chromomagnetic operator O^33_uG, the purely gluonic operators O_G and O_ϕG, and six four-quark operators. As pointed out in Ref. [52], the operator O_ϕG cannot be bounded by top pair production alone, since the branching ratio to virtual top pairs for a 125 GeV Higgs is practically zero; therefore we do not consider it here. For a recent constraint from Higgs physics see e.g. Refs. [18,20,24,25]. We further ignore the contribution of the operator O^11_uG, as this operator is a direct mixing of the left- and right-chiral u-quark fields, and so contributes terms proportional to m_u. We also note that the six four-quark operators of eq. (8) interfere with the Standard Model QCD processes uū, dd̄ → tt̄ to produce terms dependent only on four linear combinations of Wilson coefficients, denoted (following the notation of Ref. [46]) C¹_u, C²_u, C¹_d and C²_d (eq. (9)). It is these four that are constrainable in a dimension-six analysis. Finally, we note that the operator O_G, whilst not directly coupling to the top at tree level, should not be neglected. Since it modifies the triple-gluon vertex, and the gg channel contributes ∼ 75% (90%) of the total top pair production cross-section at the 8 (13) TeV LHC, moderate values of its Wilson coefficient can substantially impact total rates. We note, however, that in this special case the cross-section modifications are driven by the squared dimension-six terms instead of the linearised interference with the SM. Nonetheless, in the interests of generality, we choose to include this operator in our fit at this stage, noting that bounds on its Wilson coefficient should be interpreted with caution.‡ Representative Feynman diagrams for the interference of these operators are shown in Figure 2. The most obvious place to look for the effects of higher-dimensional terms is through the enhancement (or reduction, in the case of destructive interference) of total cross-sections. Important differences between SM and dimension-six terms are lost in this approach, however, since operators can cause deviations in the shape of distributions without substantially impacting event yields.
This is highlighted in Figure 3, where we plot our NLO SM estimate for two top pair distributions vs. one with a large interference term. Both are consistent with the data in the threshold region, which dominates the cross-section, but clear discrimination between SM and dimension-six effects is visible in the high-mass region, which simply originates from the scaling of dimension-six operator effects as ŝ/Λ².§

‡ We have observed that excluding this operator actually tightens the bounds on the remaining ones, so choosing to keep it is the more conservative option.

§ One may worry that the inclusion of the final 'overflow' bin in the invariant mass distributions may invalidate the EFT approach. We have performed the global fit without these data points, and found that they have little effect on our constraints. This is due to the large experimental uncertainties in this region, and the fact that these bins comprise less than 5% of the total degrees of freedom in our fit, so have little statistical pull.

Limits on these operators can be obtained in two ways: by setting all other operators to zero, and by marginalising over the other parameters in a global fit. In Figure 4 we plot the allowed 68%, 95% and 99% confidence intervals for various pairs of operators, with all others set to zero, showing correlations between some coefficients. Most of these operators appear uncorrelated, though there is a strong correlation between C¹_u and C¹_d, due to a relative sign between their interference terms. Given the lack of reported deviations in top quark measurements, it is perhaps unsurprising to see that all Wilson coefficients are consistent with zero within the 95% confidence intervals, and that the SM hypothesis is an excellent description of the data. In Figure 5, the stronger joint constraints on C_G vs. C¹_u obtained from including differential measurements make manifest the importance of utilising all available cross-section information. It is also interesting to note the relative pull of measurements from the LHC and Tevatron, as illustrated in Figure 5: although Tevatron data are naively more sensitive to four-quark operators, after LHC Run I and early into Run II, the LHC data size and probed energy transfers lead to comparably strong or stronger constraints. In our fit this is highlighted by the simple fact that LHC data comprise more than 80% of the bins, and so have a much larger pull. This stresses the importance of collecting large statistics as well as using sensitive discriminating observables. Single top production The next most abundant source of top quark data is from single top production. In our fit we consider production in the t and s channels, and omit Wt-associated production. Though measurements of the latter process have been published, they are not suitable for inclusion in a fit involving parton-level theory predictions. As is well known, Wt production interferes with top pair production at NLO and beyond in a five-flavour scheme [126][127][128], or at LO in a four-flavour one. Its separation from top pair production is then a delicate issue, discussed in detail in Refs. [129][130][131][132]. We thus choose to postpone the inclusion of Wt production to a future study going beyond parton level. The operators that could lead to deviations from SM predictions are collected in eq. (10); they include the vertex operators O_uW, O_dW, O^(3)_ϕq and O_ϕud, together with four-fermion operators such as O^(3)_qq.

Figure 5: Left: 68%, 95% and 99% confidence intervals on the operators C_G vs. C¹_u, considering differential and total cross-sections (contours, red star), and total cross-sections only (lines, white star). Right: Limits on C³³_uG vs. C¹_u, considering both Tevatron and LHC data (contours) and Tevatron data only (lines).
As in top pair production, there are several simplifications which reduce this operator set. The right-chiral down-quark fields appearing in O_dW and O_ϕud cause these operators' interference with the left-chiral SM weak interaction to be proportional to the relevant down-type quark mass. For example, an operator insertion of O^33_ϕud will always contract with the SM Wtb vertex to form a term of order m_b m_t C^33_ϕud/Λ². Since m_b is much less than both √ŝ and the other dimensionful parameters that appear, v and m_t, we may choose to neglect these operators. By the same rationale we neglect O^(1)_qu, as its contribution to observables is O(m_u). We have further checked numerically that the contribution of these operators is practically negligible. Finally, all contributing four-fermion partonic subprocesses depend only on a single linear combination of Wilson coefficients, denoted C_t (eq. (11)). Single top production can thus be characterised by the three dimension-six operators O_uW, O_ϕq and O_t. As noted in the introduction, several model-independent studies have noted the potential for uncovering new physics in single top production, though these have typically been expressed in terms of anomalous couplings, via the Lagrangian

L_Wtb = −(g/√2) b̄ γ^μ (V_L P_L + V_R P_R) t W⁻_μ − (g/√2) b̄ (iσ^{μν} q_ν / M_W)(g_L P_L + g_R P_R) t W⁻_μ + h.c., (12)

where q = p_t − p_b. There is a one-to-one mapping between this Lagrangian and those dimension-six operators that modify the Wtb vertex (eq. (13)): V_L receives a contribution from O^(3)_ϕq, V_R from O_ϕud, g_L from O_dW and g_R from O_uW, in each case proportional to v²/Λ².

Figure 6: Left: Individual (red) and marginalised (blue) 95% confidence intervals on dimension-six operators from top pair production and single top production (bottom three). Right: Marginalised 95% bounds considering all data from LHC and Tevatron (green) vs. Tevatron only (purple).

What, then, is the advantage of using higher-dimensional operators when anomalous couplings capture most of the same physics? The advantages are manifold. Firstly, the power-counting arguments of the previous paragraph, which allowed us to reject the operators O_dW and O_ϕud at order Λ⁻², would not be clear in an anomalous-coupling framework. In addition, the four-quark operator O^(3)_qq in eq. (10) can have a substantial effect on single-top production, but this can only be captured by an EFT approach. For a detailed comparison of these approaches, see e.g. Ref. [133]. The 95% confidence limits on these operators from single top production are shown in Fig. 6, along with those on the operators previously discussed in top pair production. Let us compare these results to our findings of Section 4.1. The bounds on operators from top pair production are typically stronger. The so-called chromomagnetic moment operator O_uG is also tightly constrained, owing to its appearance in both the qq̄ and gg channels, i.e. it is sensitive to both Tevatron and LHC measurements. For the four-quark operators, the stronger bounds are typically on the C¹_i-type ones. This originates from the more pronounced effect they have on kinematic distributions. The phenomenology of the C²_i-type operators is SM-like, and their effect becomes visible only in the tails of distributions.
The much wider marginalised bounds on these two operators stem from the relative sign between their interference terms and those of the other operators, which results in cancellations in the total cross-section that significantly widen the allowed ranges of C_i. With the exception of C_t, which strongly modifies the single top production cross-section, the individual bounds on the operator coefficients from single top production are typically weaker. This originates from the larger experimental uncertainties on single top production, which stem from the multitude of different backgrounds that contaminate this process, particularly top pair production. For the Tevatron datasets this is particularly telling: the few measurements that have been made, with no differential distributions, combined with the large error bars on the available data, mean that two of the three operators are not constrained at dimension-six.¶ Still, as before, excellent agreement with the SM is observed. In addition to single-top production, the operator O_uW may be constrained by distributions relating to the kinematics of the top quark decay. The matrix element for hadronic top quark decay t → Wb → bqq̄′, for instance, is equivalent to that for t-channel single top production via crossing symmetry, so decay observables provide complementary information on this operator. We will discuss the bounds obtainable from decay observables in Section 4.4.

¶ Our bounds on these two operators are of the same order as, but wider than, those of a pre-LHC phenomenological study [44], owing to larger experimental errors than estimated there.

Associated production In addition to top pair and single top production, first measurements have been reported [98][99][100] of top pair production in association with a photon and with a Z boson (ttγ and ttZ). The cross-sections for these processes are considerably smaller, and statistical uncertainties currently dominate the quoted measurements. Still, they are of interest because they are sensitive to a set of operators not previously accessible, corresponding to enhanced top-gauge couplings, which are ubiquitous in simple W′ and Z′ models, and which allow contact to be made with electroweak observables. The operator set for ttZ, for instance, contains the 6 top pair operators in eq. (8), plus additional operators that modify the electroweak couplings of the top quark, listed in eq. (14). There is therefore overlap between the operators contributing to associated production and those contributing to both top pair and single top. In principle, one should include all observables in a global fit, fitting all coefficients simultaneously. However, the low number of individual ttV measurements, coupled with their relatively large uncertainties, means that they do not have much effect on such a fit. (Early measurements of top pair production in association with a W boson have also been reported by ATLAS and CMS, but the experimental errors are too large to say anything meaningful about new physics therein; the measured cross-sections are still consistent with zero.) Instead, we choose to present individual constraints on the operators from associated production alone, comparing these with top pair and single top in what follows.

Figure 7: Individual 95% confidence intervals for the operators of eq. (14) from ttγ and ttZ production (green) and, in the two cases where there is overlap, from single top measurements (blue).

For the former, we find that the constraints on the operators of eq. (14)
The constraints on the new operators of eq. (14) are displayed in Figure 7. It is interesting to note that the constraints from associated production measurements are comparable with those from single top production, despite the relative paucity of the former.

Decay observables

This completes the list of independent dimension-six operators that affect top quark production cross-sections. However, dimension-six operators may also contribute (at interference level) to observables relating to top quark decay. Top quarks decay almost 100% of the time to a W boson and a b quark. The fraction of these events which decay to W bosons with a given helicity (left-handed, right-handed or zero-helicity) can be expressed in terms of helicity fractions {F_0, F_L, F_R}, which at leading order with a finite b-quark mass are functions of x = M_W/m_t and y = m_b/m_t alone, entering through the kinematic function λ = 1 + x⁴ + y⁴ − 2x²y² − 2x² − 2y². As noted in Ref. [46], measurements of these fractions can be translated into bounds on the operator O_uW. (The operator O_ϕq cannot be accessed in this way, since its only effect is to rescale the SM Wtb vertex by a term proportional to C^33_ϕq v²/Λ²; therefore it has no effect on event kinematics.)

Figure 8: 95% bounds on the operator O_uW obtained from data on top quark helicity fractions (blue) vs. single top production cross-sections (red), and both sets of measurements combined (purple).

The desirable feature of these quantities is that they are relatively stable against higher-order corrections, so the associated scale uncertainties are small. The Standard Model NNLO estimates are {F_0, F_L, F_R} = {0.687 ± 0.005, 0.311 ± 0.005, 0.0017 ± 0.0001} [134], i.e. the uncertainties are at the per mille level. It is interesting to ask whether the bound obtained on O_uW in this way is stronger than that obtained from cross-section measurements. In Figure 8 we show the constraints obtained in each way. Although they are in excellent agreement with each other, cross-section information gives a slightly stronger bound, mainly due to the larger amount of data available, but also due to the large experimental uncertainties on the F_i. Still, these measurements provide complementary information on the operator O_uW, and combining both results in a stronger constraint than either alone, as expected.

Charge asymmetries

Asymmetries in the production of top quark pairs have received a lot of attention in recent years, particularly due to an apparent discrepancy between the Standard Model prediction for the so-called 'forward-backward' asymmetry in top pair production,

A_FB = [N(∆y > 0) − N(∆y < 0)] / [N(∆y > 0) + N(∆y < 0)], where ∆y = y_t − y_t̄,

and a measurement by CDF [135]. This discrepancy was most pronounced in the high invariant mass region, pointing to potential TeV-scale physics at play. However, recent work has cast doubt on its significance for two reasons: Firstly, an updated analysis with higher statistics [90] has slightly lowered the excess. Secondly, a full NNLO QCD calculation [136] of A_FB showed that, along with NLO QCD + electroweak calculations [137-139], the SM prediction moves closer to the measured value. From a new physics perspective, it is difficult to accommodate all of this information in a simple, uncontrived model without tension. Still, in an effective field theory approach, deviations from the Standard Model prediction of A_FB take a very simple form.
A non-zero asymmetry arises from the differences of four-quark operator coefficients, C^1_u − C^2_u and C^1_d − C^2_d, with a coefficient that depends on the velocity β = √(1 − 4m_t²/ŝ) of the tt system.** Combining this inclusive measurement with differential measurements such as dA_FB/dM_tt allows simultaneous bounds to be extracted on all four of these operators. Therefore it is instructive to compare the bounds obtained on C^{1,2}_{u,d} from charge asymmetries to those obtained from tt cross-sections. Again it is possible to (indirectly) investigate the complementarity between Tevatron and LHC constraints. Though the charge-symmetric initial state of the LHC does not define a 'forward-backward' direction, a related charge asymmetry can be defined as

A_C = [N(∆|y| > 0) − N(∆|y| < 0)] / [N(∆|y| > 0) + N(∆|y| < 0)], where ∆|y| = |y_t| − |y_t̄|,

making use of the fact that tops tend to be produced at larger rapidities than antitops. This asymmetry is diluted with respect to A_FB, however. The most up-to-date SM prediction is A_C = 0.0123 ± 0.005 [139] for √s = 7 TeV. The experimental status of these measurements is illustrated in Figure 9. The inclusive measurements of A_FB are consistent with the SM expectation, as are those of A_C.

** Contributions to A_FB also arise from the normalisation of A_FB and the dimension-six squared term [140-142], which we keep, as discussed in Sections 3.3 and 4.

The latter, owing to large statistical errors, are also consistent with zero, however, so this result is not particularly conclusive. Since these are different measurements, it is also possible to modify one without significantly impacting the other. Clearly they are correlated, as evidenced in Figure 9, where the most up-to-date measurements of A_FB and A_C are shown along with the results of a 1000-point parameter-space scan over the four-quark operators. This highlights the correlation between the two observables: non-resonant new physics which causes a large A_FB will also cause a large A_C, provided it generates a dimension-six operator at low energies. We have used both inclusive measurements of the charge asymmetries A_C and A_FB, and measurements as a function of the top pair invariant mass M_tt and rapidity difference |y_tt|. In addition, ATLAS has published measurements of A_C with a longitudinal 'boost' of the tt system, β_z = |p^z_t + p^z_t̄|/(E_t + E_t̄) > 0.6, which may enhance sensitivity to new physics contributions to A_C, depending on the model [143]. Since A_FB = 0 at leading order in the SM, it is not possible to define a K-factor in the usual sense. Instead we take higher-order QCD effects into account by adding the NNLO QCD prediction to the dimension-six terms. In the case of A_C, we normalise the small (but non-zero) LO QCD piece to the NLO prediction, which has been calculated with a Monte Carlo and cross-checked with a dedicated NLO calculation [139]. The above asymmetries have been included in the global fit results presented in Figure 12. However, it is also interesting to see what constraints are obtained on the operators from asymmetry data alone. To this end, the 95% confidence intervals on the coefficients of the operators O^{1,2}_{u,d} from purely charge asymmetry data are shown in Figure 10. Unsurprisingly, the bounds are much weaker than for cross-section measurements, with the O^2_i-type operators unconstrained by LHC data alone. Despite the small discrepancy between the measured A_FB and its SM value, this does not translate into a non-zero Wilson coefficient; as before, all operators are zero within the 95% confidence intervals.
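As a concrete illustration of the two counting definitions above, the following minimal sketch (not from the paper; the rapidity samples are randomly generated stand-ins for reconstructed tt events) computes the inclusive A_FB and A_C estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock per-event rapidities of the top and antitop; in a real analysis these
# would come from reconstructed (or parton-level) tt events.
y_t = rng.normal(loc=0.05, scale=1.0, size=100_000)    # small forward bias
y_tbar = rng.normal(loc=-0.05, scale=1.0, size=100_000)

def counting_asymmetry(delta):
    """A = [N(delta > 0) - N(delta < 0)] / [N(delta > 0) + N(delta < 0)]."""
    n_pos = np.count_nonzero(delta > 0)
    n_neg = np.count_nonzero(delta < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Tevatron-style forward-backward asymmetry: Delta y = y_t - y_tbar
a_fb = counting_asymmetry(y_t - y_tbar)

# LHC-style charge asymmetry: Delta|y| = |y_t| - |y_tbar|
a_c = counting_asymmetry(np.abs(y_t) - np.abs(y_tbar))

print(f"A_FB = {a_fb:.4f}, A_C = {a_c:.4f}")
```

With identical rapidity widths for top and antitop, the |y|-based A_C comes out smaller than A_FB for the same forward bias, which is the dilution effect noted above.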
At 13 TeV, the asymmetry A_C will be diluted even further, due to the increased dominance of the gg → tt channel, for which A_C = 0. It is therefore possible that charge asymmetry measurements (unlike cross-sections) will not further tighten the bounds on these operators during LHC Run II.

Contribution of individual datasets

As well as the constraints presented in Figure 12, it is also instructive to examine the quality of fit for the different datasets. We quantify this by calculating the χ² per bin between the data and the global best-fit point, as shown in Figure 11. Overall, excellent agreement is seen across the board, with no measurement in obvious tension with any other. The largest single contributors to the χ² come from the rapidity distributions in top pair production. It has been known for some time that these are quite poorly modelled by Monte Carlo generators, especially in the boosted regime. It is quite likely that this discrepancy stems from the QCD modelling of the event kinematics rather than potential new physics.

Figure 10: Marginalised 95% confidence intervals on top pair four-quark operators from charge asymmetries at the LHC and Tevatron.

Moreover, in a fit with this many measurements, discrepancies of this magnitude are to be expected on purely statistical grounds. At the level of total cross-sections, the vanishingly small contributions to the χ² stem from two factors: the O(10%) measurement uncertainties, which are even larger in hadronic channels, and the large scale uncertainties from the large kinematic range that is integrated over to obtain the total rate. Single top production measurements are also in good agreement with the SM. The associated production processes ttγ and ttZ, along with the charge asymmetry measurements from the LHC, have a very small impact on the fit, owing to the large statistical uncertainties on the current measurements. For the former, this situation will improve in Run II; for the latter, the problem will be worse. The forward-backward asymmetry measurements from CDF remain the most discrepant dataset used in the fit.

Constraining UV models

As an illustration of the wide-ranging applicability of EFT techniques, we conclude by matching our effective operator constraints to the low-energy regime of some specific UV models. These models serve purely illustrative purposes.

Axigluon searches

Considering top pair production, one can imagine the four operators of eq. (9) as being generated by integrating out a heavy s-channel resonance which interferes with the QCD qq → tt amplitude. One particle that could generate such an interference is the so-called axigluon. Axigluons originate from models with an extended strong sector with gauge group SU(3)_c1 × SU(3)_c2, which is spontaneously broken to the diagonal subgroup SU(3)_c of QCD. In the most minimal scenario, this breaking can be described by a non-linear sigma model,

Σ = exp(2i π^a t^a / f).

Here the π^a represent the Goldstone bosons which form the longitudinal degrees of freedom of the colorons, giving them mass, the t^a are the Gell-Mann matrices, and f is the symmetry-breaking scale. The non-linear sigma fields transform in the bifundamental representation of SU(3)_c1 × SU(3)_c2: Σ → U_1 Σ U_2†, with U_i ∈ SU(3)_ci. The physical fields are obtained by rotating the gauge fields G_1 and G_2 to the mass eigenstate basis, with the mixing angle θ_c fixed by the ratio of the two gauge couplings g_s1 and g_s2. The case of an axigluon corresponds to maximal mixing, θ_c = π/4, i.e. g²_s1 = g²_s2 = g²_s/2.
Taking the leading-order interference with the SM amplitude for qq → tt, in the limit ŝ ≪ M²_A, we find that the axigluon induces the dimension-six four-quark operators of eq. (9), with coefficients proportional to g²_s/M²_A. Substituting the marginalised constraints on the four-quark operators, we find that this translates into a lower bound on the axigluon mass of M_A ≳ 1.4 TeV at the 95% confidence level. Since this mass range coincides with the overflow bin of Figure 3, this bound creates some tension with the validity of the EFT approach in the presence of resonances in the tt spectrum (for a general discussion see Refs. [114, 144, 145]); at this stage in the LHC programme, indirect searches are not sensitive enough to compete with dedicated searches.

W′ searches

Turning our attention to single top production, we consider the example of the operator O^(3)_qq being generated by a heavy charged vector resonance (W′) which interferes with the SM amplitude for s-channel single top production: ud̄ → W′ → tb̄. The most general Lagrangian for such a particle allows for both left- and right-chiral couplings, f_L and f_R, with an overall coupling strength g_W′ (see e.g. Ref. [146]). We take the generic coupling g_W′ = g_SM. Since we are considering the interference term only, which must have the same (V − A) structure as the SM, we can set f_R = 0. Considering the tree-level interference term between the diagrams for ud̄ → W → tb̄ and ud̄ → W′ → tb̄, and taking the limit ŝ ≪ M²_W′ (we also work in the narrow-width approximation, Γ_W ≪ M_W and Γ_W′ ≪ M_W′), we find an induced coefficient C^{3,1133}_qq proportional to g²_W′/M²_W′, which, using our global constraint on O_t, translates into a bound M_W′ ≳ 1.2 TeV. These bounds are consistent with, but much weaker than, constraints from direct searches for dijet resonances from ATLAS [147, 148] and CMS [149], which report lower bounds of {M_A, M_W′} > {2.72, 3.32} TeV and {M_A, M_W′} > {2.2, 3.6} TeV, respectively. It is unsurprising that these dedicated analyses obtain stronger limits, given the generality of this fit. Again, this energy range is resolved in our fit, thus in principle invalidating the EFT approach used to obtain eq. (25). Nonetheless, these bounds provide an interesting comparison for our numerical results, whilst emphasising that for model-specific examples, direct searches for high-mass resonances provide stronger limits than general global fits.

Conclusion

In this paper, we have performed an up-to-date global fit of top quark effective field theory to experimental data, including all constrainable operators at dimension six. For the operators, we use the 'Warsaw basis' of Ref. [54], which has also been widely used in the context of Higgs and precision electroweak physics. We use data from the Tevatron and LHC experiments, including LHC Run II data, up to a centre-of-mass energy of 13 TeV. Furthermore, we include fully inclusive cross-section measurements, as well as kinematic distributions involving both the production and decay of the top quark. Counting each bin independently, the total number of observables entering our fit is 227, with a total of 13 contributing operators. Constraining the coefficients of these operators is then a formidable computational task. To this end we use the parametrisation methods of the Professor framework, first developed in the context of Monte Carlo generator tuning [53] and discussed here in Section 3. We perform a χ² fit of theory to data, including appropriate correlation matrices where these have been provided by the experiments.
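To illustrate the kind of fit machinery just described, the following toy sketch parametrises a single observable as a quadratic polynomial in one Wilson coefficient (interference plus dimension-six squared term) and extracts a 95% interval from a χ² built with a correlated two-measurement dataset. All numbers (central values, polynomial coefficients, covariance) are invented for illustration and are not taken from the fit:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy quadratic parametrisation of one observable in one Wilson coefficient C:
#   O(C) = O_SM + a*C + b*C^2   (interference + dimension-six squared term)
O_sm, a, b = 250.0, -12.0, 1.5            # e.g. a cross-section in pb (mock)
data = np.array([245.0, 252.0])            # two mock measurements of O
cov = np.array([[16.0, 4.0],               # mock experimental covariance (pb^2)
                [4.0, 25.0]])
cov_inv = np.linalg.inv(cov)

def chi2(c):
    """Chi-squared of theory vs. the two correlated measurements."""
    resid = data - (O_sm + a * c + b * c**2)
    return resid @ cov_inv @ resid

best = minimize_scalar(chi2, bounds=(-10, 10), method="bounded")

# 95% interval for one fitted parameter: chi2 within 3.84 of the minimum.
grid = np.linspace(-10, 10, 2001)
allowed = grid[np.array([chi2(c) for c in grid]) < best.fun + 3.84]
print(f"best-fit C = {best.x:.3f}, "
      f"95% interval = [{allowed.min():.2f}, {allowed.max():.2f}]")
```

The real fit does the same thing in 13 dimensions over 227 bins, with the polynomial coefficients of each bin obtained from the Professor interpolation rather than set by hand.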
We obtain bounds on the Wilson coefficients of the various operators contributing to top quark production and decay, summarised in Figure 12, in two cases: (i) when all other coefficients are set to zero; (ii) when all other operator coefficients are marginalised over. The numerical values of these constraints are also shown in Table 2. Our strongest constraints are on operators involving the gluon, as expected given the dominance of gluon fusion in top pair production at the LHC (for which there is more precise data). Four-fermion operators are constrained well in general, with weaker constraints coming from processes whose experimental uncertainties remain statistically dominated (e.g. ttV production).

Figure 12: 95% confidence intervals for the dimension-six operators that we consider here, with all remaining operators set to zero (red) and marginalised over (blue). In cases where there are constraints on the same operator from different classes of measurement, the strongest limits are shown here. The lack of marginalised constraints for the final three operators is discussed in Section 4.3.

We have quantified the interplay between the Tevatron and LHC datasets, as well as that between different measurement types (e.g. top pair, single top). Our results currently agree well with the SM, which is perhaps to be expected given the lack of reported deviations in previous studies. However, the fact that this agreement is obtained in a wide global fit is itself testament to the consistency of different top quark measurements, with no obvious tension between overlapping datasets. There are a number of directions for further study. Firstly, we can improve the theory description in our fit, to include higher-order QCD corrections in a more rigorous way, as well as moving away from parton-level observables. Secondly, new data from LHC Run II is continuously appearing, and can be implemented in our fit as soon as it is available. The era of performing large global fits to widely different data in the top quark sector is now upon us, and our work in this area is ongoing.
9,919
2015-12-10T00:00:00.000
[ "Physics" ]
Thermal Characteristics of Oxazolidone Modified Epoxy Anhydride Blends

Oxazolidone-modified epoxy resin blends can be prepared with dianhydrides to form thermosets with higher thermal stability. Curing of the oxazolidone-modified resin is an addition reaction, which offers better thermal properties and improved chemical resistance. The chemical reactions that take place during cure determine the resin morphology and the properties of the cured thermosets. Such epoxy resin systems are used with various reinforcements because they offer a significant advantage over metals in the areas of weight saving and corrosion resistance, and for use in glass-fabric-reinforced flame retardants. Anhydrides are today a major class of curing agents for epoxies because epoxy-anhydride systems exhibit low viscosity and long pot life, low exothermic heats of reaction and little shrinkage when cured at elevated temperatures; hence anhydride curing systems are used for curing epoxies to obtain higher thermal stability. The properties of oxazolidone-modified epoxies can be tailored by the choice of a suitable amount of curing agent; as a result, the ratio of the epoxy to the anhydride becomes an important factor in deciding the final material performance. Temperature is a major influence on cure conditions.

The synthesis of the oxazolidone-modified epoxy was conducted in solution, using N,N-dimethylformamide (DMF) as solvent, and the product was characterized by Fourier Transform Infrared Spectroscopy (FTIR) (Figure 1). The FTIR spectrum of the synthesized polymer shows the characteristic peak for oxazolidone at 1754 cm⁻¹, whereas the peak for DGEBA at 915 cm⁻¹ is almost nonexistent, indicating that all the epoxy groups have been consumed. The peak for isocyanate at 2270 cm⁻¹ is also absent, indicating that all the isocyanate has reacted with the epoxy to produce oxazolidones. The dianhydride curing agent used was: 1) Benzophenone Tetracarboxylic Dianhydride (BTDA).

Results and Discussion

In the epoxy-anhydride curing reaction, less-than-stoichiometric ratios of curing agents are used because of significant homopolymerization. The epoxy-terminated oxazolidone is converted by means of a crosslinking reaction into a three-dimensional hard thermoset (Younes, Wartewig, Lellinger, Strehmel, & Strehmel, 1994). A ring-opening mechanism governs the reaction between the epoxy and the anhydride (Unnikrishnan, Thachil, & Eby Thomas, 2006). The mechanism of anhydride cure is complex, and both etherification and esterification can occur. The anhydride must first be converted into its monoacid/monoester for the reaction to occur. Secondary alcohols from the epoxy backbone react with the anhydride to give a half-ester, which in turn reacts with an epoxy group to give the diester. Tertiary amines are used in small amounts to accelerate the curing (Pham & Marks, 2004). During cure, the lone pair of electrons on the nitrogen atom of the amine helps in the ring opening of the anhydride group to form a complex. This in turn reacts with the epoxy group to form an ether linkage.

Thermogravimetric Analysis (TGA) of the Oxazolidone Anhydride Blends

The thermal stability of the oxazolidone-modified epoxy cured with stoichiometric amounts of dianhydride was determined by recording TG/DTG traces in N₂ atmosphere at a constant heating rate of 10 °C/min. TGA studies were carried out in a Hi-Res TGA 2950 thermogravimetric analyser; weight loss vs. temperature plots were obtained and are shown in Figures 2-7.
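As an illustrative sketch of how such TG/DTG traces are typically reduced to the quantities discussed below (degradation peak, onset/midpoint/end temperatures and char yield), the following snippet analyses a mock single-step weight-loss curve. The sigmoidal mock trace and the 5%/35%/66.5% weight-loss definitions of onset, midpoint and end temperatures are assumptions for illustration, not the analyser's actual algorithm:

```python
import numpy as np

# Mock TG trace: temperature (°C) and residual weight (%) at 10 °C/min in N2.
temp = np.linspace(30, 800, 772)
weight = 100 - 70 / (1 + np.exp(-(temp - 383) / 12))  # single-step loss near 383 °C

# DTG: rate of weight change; its (negative) peak locates the degradation maximum.
dtg = np.gradient(weight, temp)
t_peak = temp[np.argmin(dtg)]

def temp_at_loss(loss_pct):
    """Temperature at which the sample has lost loss_pct % of its initial weight."""
    # np.interp needs an increasing x-array, so reverse the decreasing weight curve.
    return np.interp(100 - loss_pct, weight[::-1], temp[::-1])

t_onset, t_mid, t_end = temp_at_loss(5), temp_at_loss(35), temp_at_loss(66.5)
char_yield = weight[-1]  # residual weight at the final temperature

print(f"DTG peak: {t_peak:.0f} °C, onset: {t_onset:.0f} °C, "
      f"midpoint: {t_mid:.0f} °C, end: {t_end:.0f} °C, char: {char_yield:.1f} %")
```

A single sharp DTG peak, as in this mock curve, is the signature of the one-step degradation reported for the blends below.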
Table 1 shows the TGA analysis data of the anhydride epoxy blends; it is evident that the onset, midpoint and end temperatures are of the same order for almost all the blends. It was observed that the char yield progressively increases as the percentage of oxazolidone in the blend increases, because of the presence of more heterocyclic groups in the oxazolidone blends. However, the blend containing 50% oxazolidone shows a decrease in char yield, probably because of the splitting of the oxazolidone and anhydride in the blend. The thermogravimetric studies on the various blends were carried out at a constant heating rate of 10 °C/min. The TGA thermograms show a single sharp degradation step for all the oxazolidone-anhydride blends, irrespective of the percentage of oxazolidone used, indicating compatibility of the system. The thermograms show a single sharp degradation peak in the region 380-386 °C, which is also indicative of the homogeneity of the networks formed during cure.

Conclusions

The synthesized linear oxazolidone-modified epoxy can be successfully cured with the dianhydride BTDA. Anhydride-cured epoxies can be of much use as high-temperature-resistant polymers, and they also exhibit better aqueous acid resistance. Hence, oxazolidone epoxy-anhydride blends can be termed polymers of the future in the field of high-temperature-resistant polymers, because in the modern world of plastics, polymers with higher thermal resistance occupy a very important and significant place. TGA studies showed that 15% oxazolidone in the blend was sufficient to give the maximum increase in thermal stability, and that 0.05 mole of catalyst (BDMA) was sufficient. The thermal studies also show that the thermal stability of the blends containing 15% oxazolidone was better than that of the other ratios, with the minimum rate of degradation in the system. Hence, in our study, 15% of the oxazolidone-modified epoxy is optimum for maximum thermal stability of the thermoset.

Figure 1: FTIR of the synthesized oxazolidone modified epoxy resin.
Table 1: TGA analysis of BTDA cured oxazolidone modified resins.
1,159.2
2012-05-27T00:00:00.000
[ "Materials Science", "Chemistry" ]
The spatial connection network pattern of urban agglomerations in the Pearl River Delta

An urban agglomeration is a spatial form of highly developed, integrated cities and an important driving force for regional economic development. Based on data on highways, regular-speed trains, high-speed trains and the Baidu index among the 9 cities of the Pearl River Delta, this paper uses spatial visualization and social network analysis to study the spatial connections and network patterns of the urban agglomeration from multi-factor flows. The results show that the spatial connections between cities in the Pearl River Delta mainly radiate outward from Guangzhou, Shenzhen and Dongguan, and connect with the network formed by secondary cities such as Foshan, Huizhou and Jiangmen. A multi-core urban network pattern is formed by an absolute core circle and a peripheral low-density circle: the core circle consists of Guangzhou-Shenzhen-Dongguan, and the low-density circle consists of 6 city nodes with low degree values. The development of regional integration can strengthen cooperation and exchange among cities, and it can also optimize the spatial structure of the urban agglomeration network.

Introduction

In the context of economic globalization and regional economic integration, urban agglomerations not only determine the future development of the regional economy, but also determine the role and status of a country in the global economic structure. At present, the evolution of the spatial structure of urban agglomerations is becoming more complicated due to increasingly developed communication and transportation facilities, and urban agglomerations show the characteristics of networked connections [1]. An urban agglomeration is a relatively comprehensive urban "aggregate" constructed in a specific area with one or more megacities as the regional economic core, relying on modern transportation and highly developed information networks. Infrastructure networks such as transportation and communications help form urban agglomerations with close economic ties and highly integrated characteristics. Compared with a single city, the advantage of urban agglomerations lies in their ability to accelerate the flow of resources, strengthen industrial cooperation and break administrative restrictions. Therefore, urban agglomerations will become an important driving force for future economic development. However, affected by geographical location, natural resources and development policies, the interconnection structure and spatial organization of each urban agglomeration are unique, resulting in different levels of urban agglomeration development. In order to clarify the direction of urbanization and promote the development of underdeveloped areas, it is of great practical significance to study the spatial connections and functional networks of each existing urban agglomeration. Traditional research on urban spatial organization discusses the development of urban clusters on the basis of central place theory, emphasizing the leading role of cities while ignoring mutual assistance and cooperation between cities. With the development of economic globalization and regional economic integration, the concept of urban networks breaks the limitations of traditional theories and provides a new way to understand the relationships between cities.
Castells [2] proposed the concept of the "space of flows", which expresses a way of sharing social resources without relying on spatial proximity, and network modeling of the functional connections between cities appeared. At present, much research on the "space of flows" and urban networks has focused on traffic and transportation flows [3,4], information transmission flows [5-7], industrial organization relationship flows [8] and capital flows [9]. These studies have deepened our understanding of the interactions between cities and provided a reference for the study of urban networks in smaller areas. For example, Taylor [10] established an interlocking network model, which combines diverse urban data to study the connections between cities at world, national and regional scales. Guimerà [11] used global aviation data to study urban networks and the roles played by cities; Grubesic et al. [12] quantified the accessibility of cities through Internet facilities. In recent years, domestic scholars have revealed the network structure of urban systems through different factor flows at different research scales. For example, An Yujing et al. [13] analyzed the spatial organization structure of the Yangtze River Delta by combining multi-factor flows, and Jiang Daliang and Sun Ye [14] used the Baidu index to construct the spatial pattern of the information flow network within an urban agglomeration. On the whole, existing urban network research presents diversified perspectives and methods, and has focused more on exploring the urban spatial pattern under a single economy or a single factor flow, but less on exploring the urban network pattern under the combined effect of multiple factor flows. The Pearl River Delta, a typical representative of China's regional economic development, is one of the urban agglomerations with the fastest growth in traffic and information flows in recent years. Under the current background of big data development, its regional spatial network pattern may undergo major changes. In view of this, this article takes the Pearl River Delta urban agglomeration as the research area. A comprehensive regional network based on highway, regular-speed railway, high-speed railway and information network data is built to explore the characteristics of the regional network structure under the combined effect of multiple factor flows. It is expected to provide scientific support for the overall coordinated development and spatial structure optimization of the urban agglomeration in the Greater Bay Area.

Data sources

The original data in this paper are the passenger traffic frequencies of buses, regular-speed trains and high-speed trains, and the Baidu index, among the 9 cities in the Greater Bay Area. The data on regular trains (numbers starting with K, Z, T and 4 digits) and high-speed trains (numbers starting with G/D/C) are from the 12306 Railway Customer Service Center (www.12306.com), and the data on bus frequencies are from the 114 fare network (www.keyunzhan.com). Since road and railway timetables are relatively fixed, data for a single day (November 18, 2020) were selected as representative. The information network data were obtained through the "regional comparison" function of the "Baidu Index" interface, selecting the monthly average of user attention between each pair of cities in November 2020.
The shortest travel-time data between two cities are extracted for regular-speed trains and high-speed trains. For city pairs without direct trains, the shortest transit route is identified according to the principle of shortest time and shortest distance, without taking transfer and waiting time into consideration. The shortest road travel time is taken as the shortest arrival time between two cities on Baidu Maps. The economic data used in this article come from the "Guangdong Statistical Yearbook 2019" [15].

Research methods

This paper adopts a social network analysis model to study the network structure of the urban agglomeration, based on the strength of the relationships between cities. At present, most scholars use a gravity model constructed from urban GDP and population [16-18] to calculate the strength of urban relations. Because only two indicators are used, the analysis results are one-sided and cannot measure the comprehensive capability of a city. Urban flow intensity is the intensity of the factor flows generated when economic agglomeration and diffusion occur between cities. It uses the level of labor to calculate the capacity of each region, in each industry, to provide services to other regions, thereby reflecting the inter-regional factor flow environment and capabilities more fully. Some studies use the urban flow intensity model to identify and compare the spatial structure of urban agglomerations, and other studies have verified that urban flow intensity can reflect the closeness of inter-regional connections, making it a more reliable descriptor of spatial relationships. Therefore, this article introduces urban flow intensity into the general gravity model to quantitatively capture the comprehensive capability of cities. The improved gravity model measures the relationship between cities and reflects the exchange and interaction of factors such as production, life, information and technology, in order to study the network characteristics of the urban agglomeration:

(1) R_ij = (m_i × m_j) / (D_ij)^b

where R_ij is the closeness of the relationship between city i and city j; m_i and m_j are the quality (comprehensive mass) of cities i and j, respectively; b is the distance-friction coefficient; and D_ij is the distance between the two cities.

(2) The formula for measuring urban flow intensity [22] is F = N × E, where N is the functional efficiency of the city, that is, the actual impact produced by a unit of the city's external functional capacity between cities, and E is the external functional capacity of the city, reflecting the size of that capacity.
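A minimal sketch of how formulas (1) and (2) combine into a connection-strength matrix follows. The three-city GDP, functional-capacity and distance values, and the use of GDP × flow intensity as the city mass m, are illustrative assumptions rather than the paper's actual calibration:

```python
import numpy as np

# Improved gravity model: R_ij = (m_i * m_j) / D_ij**b
# City "quality" m combines indicators such as GDP and urban flow intensity;
# here we simply multiply mock GDP by mock flow intensity.
gdp = np.array([23628.6, 26927.1, 9482.5])   # mock GDP (100 million yuan)
E = np.array([310.0, 350.0, 120.0])          # mock external functional capacity
N = np.array([0.12, 0.15, 0.08])             # mock functional efficiency
F = N * E                                    # urban flow intensity, F = N * E

m = gdp * F                                  # one possible composite city mass
D = np.array([[np.inf, 104.0, 60.0],         # mock inter-city distances (km)
              [104.0, np.inf, 90.0],
              [60.0, 90.0, np.inf]])
b = 2.0                                      # distance-friction exponent

R = np.outer(m, m) / D**b                    # connection-strength matrix
np.fill_diagonal(R, 0.0)                     # no self-connections
print(np.round(R / R.max(), 3))              # normalised connection strengths
```

The resulting symmetric matrix R plays the role of the inter-city relationship strengths that the network analysis below operates on.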
3.1. Road passenger transport network

The 9 cities in the Pearl River Delta form 72 road passenger transport links. First, the density of inter-regional connections is analyzed. Compared with the other three cities, the six cities of Guangzhou, Shenzhen, Dongguan, Foshan, Zhongshan and Zhuhai have more inter-city connections, and their frequency of road transportation is also higher. In terms of connection strength, the city pairs with strong connections are Guangzhou-Shenzhen, Zhuhai-Guangzhou, Shenzhen-Dongguan and Guangzhou-Dongguan; the transportation efficiencies of these four city pairs are 0.815 min/km, 0.878 min/km, 1.068 min/km and 1.089 min/km, respectively.

3.2. Regular-speed train network

There are 31 regular-speed train links among the 9 core cities of the Pearl River Delta. From the perspective of inter-regional connection density, the cities along the Guangzhou-Shenzhen Railway, such as Guangzhou, Dongguan and Shenzhen, have frequent connections and high connection density. Among them, Guangzhou-Shenzhen and Dongguan-Shenzhen have relatively high connection levels, with about 35 trains per day on average. This is mainly because the Guangzhou-Shenzhen railway ties together the intra-regional network of the Pearl River Delta urban agglomeration.

3.3. High-speed train network

There are 60 high-speed train links among the 9 core cities of the Pearl River Delta. Compared with the regular-speed train network, high-speed trains connect the cities more closely. From the perspective of inter-regional connection density alone, high-speed trains not only produce a high connection density among the cities around the Guangzhou-Shenzhen line, but also increase and expand the connection density and spatial coverage of Guangzhou-Zhuhai, Zhaoqing-Huizhou, Guangzhou-Jiangmen and other city pairs. From the perspective of inter-city connections, the connection density is relatively high not only between Guangzhou and Shenzhen but also between Guangzhou-Zhuhai, Zhaoqing-Foshan, Guangzhou-Zhongshan and other city pairs. In terms of the density of connecting lines among cities, the strongest connection within the Pearl River Delta urban agglomeration is Guangzhou-Shenzhen, with 187 high-speed trains running daily, followed by Guangzhou-Foshan and Zhongshan-Zhuhai, with 180 and 104 high-speed trains operated daily, respectively.

3.4. Information flow

The number of information network links among the 9 core cities of the Pearl River Delta urban agglomeration is 72. From the perspective of inter-regional contact density, the information network contact density of Guangzhou, Shenzhen, Dongguan, Zhongshan, Zhuhai and Foshan is higher than that of the other three cities. From the perspective of the contact strength of each city pair, the information network connection of Guangzhou-Shenzhen is 1096, that of Shenzhen-Huizhou is 748, that of Guangzhou-Foshan is 746, and that of Guangzhou-Zhuhai is 696.

On the basis of the above networks, it can be seen that road transportation, together with the high-speed rail and information networks, constitutes the basic skeleton of the spatial network of the Pearl River Delta. Owing to comprehensive factors such as city scale, economic development level and inter-city high-speed rail, the time-space differences between cities are reduced, the development of regional integration is further accelerated, and the external connection networks of cities are broadened.

City correlation analysis

The original data are standardized by formulas (1) and (7), and highways, high-speed railways, regular-speed railways and information flows are given equal weight to construct a comprehensive urban network connection matrix for the Greater Bay Area. The comprehensive connection network diagram of the Greater Bay Area (Fig. 3) is visualized with ArcGIS. The comprehensive network density of the urban agglomeration in the Greater Bay Area is 0.42. The urban spatial network mainly radiates outward with Guangzhou, Dongguan and Shenzhen as the core, and forms connections with secondary cities such as Foshan, Huizhou and Jiangmen. Its radiation flows mainly between Guangzhou-Shenzhen, Guangzhou-Dongguan, Shenzhen-Dongguan and other city pairs, and the urban spatial network of the Greater Bay Area presents a multi-core development model.
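The equal-weight combination of the four layers and the density and degree measures used in this and the next subsection can be sketched as follows. The layer matrices here are random placeholders, the 0.5 binarisation threshold is an assumption, and networkx stands in for the Ucinet software actually used in the paper:

```python
import numpy as np
import networkx as nx

cities = ["Guangzhou", "Shenzhen", "Dongguan", "Foshan", "Zhuhai",
          "Zhongshan", "Huizhou", "Jiangmen", "Zhaoqing"]

# Mock normalised layer matrices (road, regular rail, high-speed rail, Baidu
# index), each 9x9 and symmetric; real values would come from the raw data.
rng = np.random.default_rng(42)
layers = [(lambda a: (a + a.T) / 2)(rng.random((9, 9))) for _ in range(4)]

# Equal-weight combination into the comprehensive connection matrix.
W = sum(layers) / len(layers)
np.fill_diagonal(W, 0.0)

# Binarise with a threshold, then compute network density and degree centrality.
A = (W > 0.5).astype(int)
G = nx.relabel_nodes(nx.from_numpy_array(A), dict(enumerate(cities)))

print("network density:", round(nx.density(G), 2))
print("degree centrality:",
      {c: round(v, 2) for c, v in nx.degree_centrality(G).items()})
```

On the real data, the high-degree nodes of this computation correspond to the Guangzhou-Shenzhen-Dongguan core circle described below.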
Judging from the comprehensive connection network diagram of the urban agglomeration in the Greater Bay Area, the regional connection density takes Guangzhou-Dongguan-Shenzhen as the core circle and gradually decreases towards the outer layer. In addition, the connection density in the eastern part of the Bay Area is significantly higher than that in the western part. Therefore, the uneven spatial structure of the urban agglomeration in the Greater Bay Area is prominent, and the entire region presents a "core-periphery" structure. The spatial structure is positively correlated with economic development: for example, Zhaoqing and Jiangmen have low per capita GDP, and these two cities are at the edge of the entire spatial structure. Combining the four single-factor maps with the comprehensive connection network map of the Greater Bay Area urban agglomeration, it can be seen that Guangzhou, Shenzhen and Dongguan are the important nodes of the entire urban agglomeration network and play a guiding role in the development of regional integration. On the one hand, the development model of the Greater Bay Area has strengthened the radiation effect on surrounding cities and promoted the economic development of the Greater Bay Area. On the other hand, several fringe cities with lower degree values move closer to the core circle under the regional integration policy; by increasing exchanges with other cities, they constantly gain more development space.

Centrality analysis

Ucinet 6.0 is used to calculate the structural characteristic parameters of the comprehensive connection network of the Pearl River Delta urban agglomeration, and ArcGIS 10.3 is used for spatial visualization (Figure 4). It can be seen that the regional agglomeration characteristics of the high-centrality area are obvious. Guangzhou-Shenzhen-Dongguan forms the core circle of the urban agglomeration network, and the other six cities (Foshan, Zhuhai, Zhaoqing, Zhongshan, Jiangmen and Huizhou) form the peripheral circle. The "core-periphery" structure of the entire urban agglomeration network is obvious. Among them, Guangzhou's dominant position is stable, and the communication capacity of the high-value core cities is significantly higher than that of other cities, further demonstrating the obvious integration of the Greater Bay Area. It can be seen from the functional circle and radiation influence range of each city that Guangzhou is in an absolutely dominant position, and Foshan and Zhaoqing are heavily influenced by Guangzhou. This is consistent with the earlier policy of "Guangzhou-Foshan-Zhaoqing integration". The two-level differentiation shows that the cities in the core circle have a strong grasp of regional integration. Inter-city traffic, information networks and differentiated city functions exert a differential attraction on population: while strengthening the population agglomeration effect, they also spread the flow of people to the surrounding small and medium-sized cities. Thus, modern cities are urged to continuously expand urban network space under the action of the "space of flows". With the "space of flows" as the carrier and the flow of "people" as the core, various information technologies not only promote the concentration and dispersion of human resources inside and outside the region, but also promote the development of urban networks.
The increasingly complete urban rapid transportation network, especially the seamless connection of subways, intercity railways and high-speed railways, together with mobile communication technology, effectively supports the demand for enhanced economic and social connections inside and outside cities, and will eventually usher in a new era of the "space of flows" based on mobile information technology [23]. Undoubtedly, the "flow" and "openness" of space brought about by these powerful social forces will have a positive impact on the reconstruction and networking of national and regional urban systems [24].

Conclusion

This paper analyzes the urban spatial connections and network pattern of the Greater Bay Area from the perspective of multiple comprehensive factor flows, using highway, regular-speed train, high-speed train and Baidu index data for the 9 cities of the Pearl River Delta. The following conclusions are drawn. First, the spatial connections among cities in the Pearl River Delta mainly radiate outward from Guangzhou, Shenzhen and Dongguan as the core, and connect with the network formed by secondary cities such as Foshan, Huizhou and Jiangmen. Cities at different levels have different effects on the future development of the regional spatial structure. Among them, the Guangzhou-Dongguan-Shenzhen connection line is the most closely connected region and will play a guiding role in the development of the spatial structure of the urban agglomeration in the future, while cities such as Zhongshan and Foshan play the role of connecting "transit stations". On the one hand, the areas radiated by the core cities of Guangzhou, Shenzhen and Dongguan take on higher-level functions and services from the leading cities to support their own economic development. On the other hand, neighboring cities in the core radiation area exchange materials at the production and consumption levels to form an urban system, which radiates to peripheral cities and affects the overall spatial structure of the region. Second, an urban network pattern has formed, consisting of an absolute core circle composed of Guangzhou-Shenzhen-Dongguan and a peripheral low-density circle composed of the other six city nodes with low degree values; the overall pattern is one of multi-core development. The current spatial structure of the Pearl River Delta urban agglomeration presents an obvious "core-periphery" structure. However, with the trend of globalization and the strengthening of regional integration policies, the development opportunities of regional fringe cities will increase. In the future, exchanges and cooperation among cities will be more extensive, promoting a more balanced development of the urban agglomeration in the Greater Bay Area. At that time, the agglomeration effect of the urban agglomeration in the Greater Bay Area will continue to be transformed into a diffusion effect, and the spatial structure of the region will change from an unbalanced "core-periphery" pattern to a balanced "network-node" pattern. The increasing mobility of capital, technology, information and labor in the region will greatly enhance the ability of the Greater Bay Area city cluster to absorb and allocate resources on a global scale, and help it move towards an era of networked development.
Third, the development of regional integration can strengthen cooperation and exchange among cities and optimize the spatial structure of the urban agglomeration network in the Greater Bay Area. The mobility radiation areas of the cities in the Greater Bay Area show clear integration, and the spatial form of each city's radiation influence area also indicates its functional circle and sphere of influence. Foshan and Zhaoqing are greatly affected by the radiation of Guangzhou; the mobility radiation areas of Shenzhen, Dongguan and Huizhou are intertwined; and cities such as Zhuhai, Jiangmen and Zhongshan are in the marginal zone. Based on multi-factor flow data, this paper analyzes the spatial connections and network patterns of the Pearl River Delta urban agglomeration and makes a preliminary discussion of its influencing factors, which can provide a reference for the study of regional spatial structure. The analysis of the urban agglomeration network based on traffic and information flows represents only one quantifiable perspective and cannot fully reflect the real connections within the region. Undeniably, the flows captured by highway, railway and Baidu data can closely connect city nodes at different levels in the urban agglomeration, and enhance and optimize the spatial connection network structure to a certain extent. However, due to the time-sensitive nature of passenger traffic and Baidu index data, the data change and update rapidly, and the acquisition of real-time and accurate data is quite limited; thus, the research results have certain limitations. Based on data-availability considerations, only prefecture-level cities are selected as the basic research unit, and the flow data of sub-administrative units such as counties and towns cannot be counted, which is not conducive to an in-depth analysis of the detailed characteristics of regional connection directions and spatial structure. In addition, the evaluation of economic output efficiency in network space and its correlation mechanisms are also directions for further study.
4,684.4
2021-01-01T00:00:00.000
[ "Economics" ]
Assessing the Efficiency of Sustainable Cities Using an Empirical Approach

Sustainability is a multidisciplinary discipline posing a difficult problem as a result of its integrated assessment. From a broad perspective, it considers the impact of human activities (using different resources) and natural conditions on local environments. Urban development has been identified as one of the most important causes of environmental and social degradation. To address the complexity of sustainability and its impact, policymakers need to be equipped with the right toolkit to foresee the integrated effect of projects and plans on urban sustainability more effectively in their policy design. In this paper, we propose a tool to assess the sustainable performance of urban areas through a common framework of indicators, which provides an integrated measurement based on the relative efficiency of key input variables on desirable and undesirable outputs. Using Data Envelopment Analysis (DEA), we propose a procedure for determining the relative efficiency of relevant urban areas, proposing this method as a candidate for integrated sustainability measurement. The selection of variables is based on dimensions which can be addressed from a political perspective for achieving more desirable outputs, or reducing the undesirable ones, controlling for key resources as much as possible. Our analysis takes a comprehensive scope, including an environmental and socioeconomic perspective. This will be useful to identify weaknesses and strengths to improve the integrated performance of cities. Our array of indicators, based on standardized key performance indicators (KPIs), will enable policymakers to gauge the impact of their proposals on urban sustainability by carrying out a global sustainability impact assessment through DEA. The main goal is to gather the urban experience of transforming cities into smarter cities and putting technological progress at the service of their societies.

Introduction

The expansion of urban environments is linked to global challenges of sustainability, particularly in regions where the process of urbanization is still unfolding, or the urban metabolism is undergoing a thorough regeneration [1]. In urbanized regions such as Europe, where more than 70% of people are urban dwellers, sustainability is one of the most important challenges, especially concerning the use of energy, economic performance, de-carbonization of infrastructure, wastewater management, and other ecosystem services of cities and urban communities [2]. The consumption of these resources can play a crucial role in the development of the UN sustainability goals [3]. Undesirable outputs, such as carbon dioxide and wastewater, are associated with desirable production, and their reduction is made possible by effective operational management [13]. However, there are other applications that do not consider undesirable outputs [14]. Used carefully, DEA can potentially facilitate the analysis of the main policy issues and improve business strategies to enhance the sustainability of cities. Yang et al. [15] evaluate regional environmental efficiency in China over 10 years based on a super-efficiency DEA model to observe regional disparities. Such super-efficiency models are also useful to assess benchmark performances. Recently, Zhao et al.
[16] link the socio-economic and environmental perspectives in the evaluation of cities with a linked parallel system of two subsystems to understand the operational process of the sustainable development system. We study how efficiently cities use their inputs to produce desirable outputs. Our main purpose is to evaluate whether cities are using their available inputs efficiently from an environmental perspective. Then, we can deduce some sociological consequences, without claiming causality. Amongst all the alternatives available, we opt for the Slack-Based Inefficiency (SBI) model because it relies on slacks. This enables us to determine the percentages by which cities should reduce their inputs in order to reach total efficiency. Efficiency is obtained by subtracting the SBI value from 1, so the best performers (in the benchmark) have an efficiency score equal to 100%. Decisions on what is efficient or not depend on the outputs expected to be achieved using the inputs available in the territory to address local goals. As Gottdiener and Hutchinson [17] conclude, the human ecosystem framework may look like a shopping list of system components. However, it is crucial to realize that the most significant feature of the framework is the fact that it points out the interactions among specific natural, social, and cultural components of the metropolis. This recognition prompts ecologists to be concerned with how people use and behave in the metropolitan ecosystem in a spatially explicit way. Therefore, further to political decisions on efficiency, we emphasize that the human ecosystem is the result of a complex interaction in which social issues such as poverty, inequality, environmental justice, and public participation in decision-making and space production (in sum, equity) must be taken into account. Ahern [18] addresses the dynamic interactions between nature and society: how social change influences the environment and how environmental change shapes society. We control for only three inputs to simplify the analysis: population, water, and energy consumption. Potentially, this could be helpful for policymakers to tackle social problems and increase awareness among the inhabitants of the metropolitan ecosystem [19]. Population is important for several reasons. Firstly, as mentioned earlier, the growing urban population puts pressure on land and services. Secondly, climate risks and hazards are unevenly distributed and socially differentiated, especially in cities with diverse populations of different languages, cultural backgrounds, ages, sexes, etc. [20]. Climate change injustice happens along ethnic, gender, class, and racial lines [21,22]. Thirdly, people must participate actively in reducing the impact of the ecological crisis in cities. They are fundamental stakeholders in the face of natural hazard risks in cities, and special attention should be given to vulnerable populations. Furthermore, it is important that people are properly sensitized, informed, and warned about risks and hazards. Finally, inequality leads to greater environmental degradation, and a more equitable distribution of power and resources would result in improved environmental quality [23,24]. The paper is organized as follows. In Section 1, we expose the main concepts and objectives of this work, which are framed within the 2030 UN goals. In Section 2, we explain our model and carry out a statistical analysis of the data. Section 3 describes the main results of the analysis.
In Section 4, we discuss the results in relation to the existing literature. Finally, we set out our main conclusions.

Materials and Methods

DEA is a non-parametric technique that evaluates the efficiency of each operational unit, called a Decision Making Unit (DMU), in the model and defines the operational targets or benchmarks of the inefficient ones. The concept of efficiency assesses the production capacity of the DMUs based on their available resources. The observed data define the Production Possibility Set (PPS), known as the technology, under different assumptions. There are three types of technology: Free Disposal Hull (FDH), which considers free disposal of inputs and outputs; Variable Returns to Scale (VRS), which additionally assumes linear convexity of the observed DMUs; and Constant Returns to Scale (CRS), which comprises VRS technology and assumes that any observed operational unit can be scaled [25]. The efficient frontier (EF) is then the subset of DMUs that perform best in the PPS. This subset dominates, since no other unit can produce more outputs with a smaller amount of inputs. Each inefficient DMU is projected onto the EF, thereby defining its benchmark. In each dimension, the distance of an inefficient DMU to the efficient frontier is called a slack (s). The DMUs on the EF are efficient and, therefore, their slacks are zero. Inefficient DMUs have different benchmarks depending on the DEA model specification used, the benchmarks being defined by the most efficient operational units. The choice of DEA specification depends on the goal that the decision maker wants to analyze. For example, input-oriented models focus on reducing the amount of inputs, output-oriented models prioritize increasing production, and non-oriented models reduce inputs while simultaneously increasing outputs. Alternatively, the benchmarks depend on the metric used (radial, directional distance function, slack-based, etc.). The first class of DEA models are radial models, like CCR [10]. These models project the DMUs onto the EF, measuring the technical efficiency of each unit. However, they overestimate technical efficiency when nonzero slacks are present. Charnes et al. [26] proposed an additive model to curb this overestimation while maximizing the slacks of inputs and outputs at the same time. In DEA, multiple models can be used to measure the performance of the evaluated units. The additive model proposed by Charnes et al. [26] is able to discriminate between efficient and inefficient DMUs. However, the differing properties of this model and the CCR model are explained by the different units in which the sums of input and output slacks are expressed. This justifies the use of other slack-based models, like the Range Adjusted Measure (RAM) developed by Cooper et al. [27], which normalizes the slacks of inputs and outputs, and the Slack Based Measure (SBM) model developed by Tone [28], which satisfies monotonicity and unit invariance with respect to slacks. Later, Fukuyama and Weber [29] defined the Slack-Based Inefficiency (SBI) model to measure technical inefficiency while considering all slacks in the input and output constraints. The SBI model we use is related to the directional technology distance function [30,31], which seeks a maximum non-radial increment in outputs while reducing inputs for a given directional vector. Our model estimates data from 45 cities and controls for three inputs (population, water, and energy) to produce outputs (desirable and undesirable).
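To make the slack-based machinery concrete before the formal statement, here is a minimal sketch of a non-oriented, additive-style DEA model under VRS solved as a linear program. The three-city dataset is invented, undesirable outputs and weak disposability are omitted for brevity, and the slack normalisation is only analogous to, not identical with, the SBI measure defined below:

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 3 DMUs (cities), 2 inputs (rows of X), 1 output (rows of Y).
X = np.array([[100.0, 120.0, 150.0],    # input 1 per city
              [ 50.0,  40.0,  90.0]])   # input 2 per city
Y = np.array([[200.0, 260.0, 250.0]])   # desirable output per city

n, m, s = X.shape[1], X.shape[0], Y.shape[0]

def sbi_style_inefficiency(j0):
    """Additive-style inefficiency of DMU j0: max mean of normalised slacks."""
    x0, y0 = X[:, j0], Y[:, j0]
    # Decision vector: [lambda (n), input slacks (m), output slacks (s)].
    # linprog minimises, so negate the normalised-slack objective.
    c = np.concatenate([np.zeros(n), -1 / (2 * m * x0), -1 / (2 * s * y0)])
    A_eq = np.block([
        [X, np.eye(m), np.zeros((m, s))],         # sum_j lam_j x_j + s^- = x0
        [Y, np.zeros((s, m)), -np.eye(s)],        # sum_j lam_j y_j - s^+ = y0
        [np.ones((1, n)), np.zeros((1, m + s))],  # sum_j lam_j = 1  (VRS)
    ])
    b_eq = np.concatenate([x0, y0, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + s))
    return -res.fun  # inefficiency; efficiency is theta = 1 - inefficiency

for j in range(n):
    ineff = sbi_style_inefficiency(j)
    print(f"city {j}: inefficiency = {ineff:.3f}, efficiency = {1 - ineff:.3f}")
```

Efficient cities obtain zero slacks (inefficiency 0), while inefficient ones receive a convex combination of peers, the λ weights, as their benchmark, which mirrors the peer analysis reported in the results below.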
Figure 1 illustrates the control variables we incorporate in our model to evaluate the efficiency of the cities. Our inputs are population (number of people living in the city), water consumption (m³), and energy consumption (MWh). The desirable output is gross domestic product (GDP) (measured in US dollars), and the undesirable outputs are PM2.5 (measured as the average level in µg/m³ experienced by the population), CO₂ (thousands of equivalent CO₂ tons), and wastewater (%).

We propose the utilization of the SBI model to evaluate the efficiency of cities. SBI models are non-oriented. This demands that the normalization of the slacks be performed with the observed values of the evaluated DMUs [32]. Indeed, this is what we analyze from an empirical approach. The non-oriented feature of SBI models reduces inputs and maximizes outputs at the same time. As the SBI model assesses inefficiency, efficiency is obtained by subtracting the SBI value from 1. We apply this model to each DMU, thereby maximizing the mean of their normalized slacks (1). The analysis of efficiency assumes a set of n observed DMUs {DMU_j : j = 1, ..., n}, where each DMU uses m inputs (x) to produce s desirable outputs (y). However, the production of these desirable outputs creates w undesirable outputs (y^b), which are linked to the process. These undesirable outputs can be modeled under the assumption of weak disposability, implying that undesirable outputs can be reduced, but at a cost which requires reducing the production of desirable outputs [33]. The left part of constraints (2)-(4) defines the efficient frontier based on the DMUs, while the right part contains the slacks of the evaluated DMU (x_0, y_0, y^b_0); for the desirable outputs, for example, constraint (3) reads Σ_{j=1}^{n} y_{kj} λ_j = y_{k0} + s⁺_k, k = 1, ..., s. The variables λ_j and µ_j represent the weak disposability assumption proposed by Kuosmanen [33], and constraint (5) specifies the technology under Variable Returns to Scale (VRS). Thus, the EF is a linear convex combination of the observed DMUs. The reason we choose VRS is the higher discrimination among DMUs than under CRS [15]. Therefore, not comparing the DMUs with scaled versions of other DMUs offers a more realistic comparison. As a consequence, the EF under VRS technology contains a higher number of DMUs than under CRS technology. In this model, all efficient DMUs have an SBI value of zero; they lie on the EF. As emphasized earlier, the SBI model measures the inefficiency of DMUs.
Earlier, we emphasized that the SBI model measures the inefficiency of DMUs. This inefficiency is defined as the average of the mean normalized slacks of the DMU, grouped by inputs, undesirable outputs, and desirable outputs. Therefore, the efficiency of a DMU is determined by the parameter θ, calculated as θ = 1 − SBI. Further insight can be gained from Equations (7)-(9), which measure the normalized slacks of inputs, desirable outputs, and undesirable outputs, respectively. Descriptive Analysis In our analysis, we compare cities around the world of similar size. Indeed, DEA and the SBI model provide a neutral background to measure efficiency as the common goal, and offer an excellent tool for a comprehensive evaluation of sustainability regardless of the different realities, climates, societies, and interactions amongst cities. This is our main hypothesis. We selected data from the OECD data repositories with additions from the World Council on City Data. This covers 45 cities, mostly from Europe, but also from the US, Chile, and Japan. We gathered information on population, real GDP, air pollution measured in PM2.5, CO2 footprint, and energy and water consumption for each city. Table 1 summarizes the main descriptive statistics from the data source. In Table 1, Manchester has the largest number of inhabitants and Trondheim the lowest. Lyon consumes the highest volume of fresh water and Belfast the lowest. Porto consumed the highest total energy in 2018 and Cartagena the lowest. Portland is the richest city in terms of real GDP and Cartagena the poorest. Cracow is the most polluted city (in PM2.5) in the sample and Portland the least. San Antonio is the highest CO2 emitter and Debrecen the city with the lowest level of CO2 emissions. Finally, Concepcion processes the highest level of urban wastewater and Cartagena the lowest. We measure which dimensions each inefficient city should improve to reach the efficient frontier, based on the available data. Regression Results Table 2 details our estimates from the SBI model. The coefficients report the normalized slacks of each city for inputs (SBI X), desirable outputs (SBI Y), and undesirable outputs (SBI YB), and the overall efficiency indicator (θ) is shown in the last column. Almost half of the cities are efficient (20 cities). The remaining 25 cities are considered inefficient, and only six of them (Hiroshima, Antwerp, The Hague, Nice, Lille and Bordeaux) have benchmarks able to produce more GDP than their current level. This means that the remaining inefficient cities could improve their performance by better managing their resources and reducing their undesirable outputs. Moreover, 56% of the inefficient DMUs have a higher normalized slack for the undesirable outputs than for the inputs. For the inefficient units, we computed the slacks to identify the inputs each inefficient city should change to manage its available resources more efficiently (Table 3). Overall, inefficient cities have room for improvement by reducing their water consumption, since the mean and median of this slack are 54.92% and 55.64%, respectively. For population and energy, the medians are 8.96% and 11.16%, respectively, and the means are 11.76% and 16.56%, respectively.
Vancouver, Hanover, Linz, The Hague, Toulouse, Gothenburg, Tallinn, Utrecht, Antwerp, Rotterdam, Helsinki, Tampa-Pinellas, and Pittsburgh are the only inefficient cities that can reduce their energy consumption, judging from a comparison of their performance with their benchmarks. The mean of their energy consumption slacks is 0.17. In this subset of cities, only six (Gothenburg, Hanover, Helsinki, Vancouver, Toulouse, and Tampa-Pinellas) have as a benchmark a city with a lower population (Table 4). These six cities are the only ones that have slacks for all three inputs. Regarding the undesirable outputs, wastewater is the variable with the lowest slack values (0.26 as the median and 0.28 as the mean), while CO2 is the variable with the highest slacks of all the undesirable outputs (0.42 as the median and 0.43 as the mean). DEA makes it possible to observe the benchmarks of each inefficient unit and the observed efficient cities that define those benchmarks. There are 10 cities (Bilbao, Bologna, Cracow, Florence, Glasgow, Lyon, Manchester, Trondheim, Turin, and Turku) that are efficient but do not act as a benchmark for any inefficient unit, since they are outliers. On the contrary, Aarhus, Belfast, Cartagena, Copenhagen, Cork, Debrecen, Marbella, Portland, Porto, and Zurich are peers, efficient cities that define the benchmarks for the inefficient units. Copenhagen, Debrecen, and Zurich are the efficient cities most frequently used to define the targets in the database. Table 5 reports the influence of each efficient city (benchmark) over the inefficient cities. We show the inefficient cities in rows and, in columns, the efficient cities (benchmarks) that act as peers for any of the inefficient cities in this dataset. Table 5 shows the estimates of λ_j and λ_j + µ_j, which define the benchmarks. The variable λ_j identifies the peers in constraints (3)-(4) for the desirable and undesirable outputs under the weak disposability assumption, while the sum λ_j + µ_j identifies the peers in constraint (2) for the inputs. The coefficient λ_j takes a zero value for all the inefficient units that use Debrecen as a peer, except for Concepcion, which has µ_j = 0. This means that Concepcion has Debrecen as a peer due to its production level of desirable and undesirable outputs for its consumption of inputs, while for the rest of the inefficient cities that have Debrecen as a peer, its expertise as a resource manager is the reference. On the contrary, Zurich acts as a peer for most of the inefficient units, not only for their resource management but for their level of production too.
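This λ_j / (λ_j + µ_j) reading of Table 5 can be automated; a hypothetical helper (ours, matching the SBI sketch given earlier) is:

```python
# Hypothetical helper: recover the peers of an evaluated city from the
# optimal weights of the SBI program (x_opt as returned in res.x above).
def peers(x_opt, n, names, tol=1e-9):
    lam, mu = x_opt[:n], x_opt[n:2 * n]
    output_peers = [names[j] for j in range(n) if lam[j] > tol]          # (3)-(4)
    input_peers = [names[j] for j in range(n) if lam[j] + mu[j] > tol]   # (2)
    return input_peers, output_peers
```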
Discussion In recent years, DEA has been widely used to assess urban sustainability [34-36]. From an empirical approach, the use of DEA in urban contexts assesses the performance of cities across all the potential dimensions related to sustainability. DEA can be used for benchmarking, target setting, measuring returns to scale, measuring congestion, and so on, because of the capabilities of DEA models in evaluating and ranking DMUs [37]. Although DEA is an excellent tool to guide policy makers in improving social and urban sustainability [13], it has to be used carefully, and researchers must be aware of its limitations and strengths. From an urban policy perspective that includes the analysis of KPIs, this paper comprehensively addresses one of the most important dilemmas in the assessment of urban sustainability. We benefit from the SBI model and assume Variable Returns to Scale (VRS) in the production function (technology). This helps us to evaluate the performance of the cities in a more realistic way, in contrast to most of the existing empirical evidence [17]. A similar approach has been applied to the integrated sustainability performance assessment of universities (based on the UI GreenMetric ranking) by Puertas and Marti [38]. The result is that almost half of the cities in our sample are efficient. Half of the efficient cities act as a benchmark for the inefficient units, and the other half are outliers; inefficient cities therefore do not take any of those outliers as peers (benchmarks). Looking more closely at the information included in Table 4, any inefficient city can identify its benchmarks (and thus replicate relevant policies) and the observed efficient cities that define those benchmarks. Overall efficiency is not relevant for decision makers if the city does not act as a benchmark for any inefficient unit (when efficient cities are outliers). On the other hand, Aarhus, Belfast, Cartagena, Copenhagen, Cork, Debrecen, Marbella, Portland, Porto, and Zurich are peers that define the benchmarks for the inefficient cities. Copenhagen, Debrecen and Zurich are the efficient cities most commonly used to define the targets. From a practical point of view, any city incorporated into the database can first obtain a diagnosis of its relative efficiency, and then derive specific benchmarks from peers for setting its future (optimal) policy goals. A city can first look at its relative ranking position from the SBI efficiency model (Table 2); a detailed comparison of its relative slacks and corresponding benchmarks then indicates which policies have to be addressed for an optimal result in the given city. In sum, slacks show which variables inefficient cities should reduce (in the case of inputs or undesirable outputs) or increase (GDP) in order to improve their performance and become more efficient. Larger slacks mean that the effort a city should make on that variable is greater. This paper is not exempt from limitations. First, although the model obtained with DEA and SBI is the best among the possible models given the available data, the restricted data we have coped with mean that our model can still be improved a great deal. We are committed to looking for more data in order to build a better and more complex model that reflects the reality of cities meaningfully. Second, the use of DEA and SBI invariably leads to a specific final model. In our work, the SBI model searches for the maximum distance of the inefficient cities to the EF, reducing their resources and undesirable outputs whilst increasing their desirable outputs. Other models define a fixed direction along which all the DMUs are projected onto the EF, while in our model each DMU follows its own direction. Third, in this paper we have focused on a single period for the assessment of the efficiency of the cities. A multiple-period analysis could be an interesting further research avenue to pursue. Conclusions This paper evaluates the performance of cities utilizing the SBI model to guide that process. While this model has been tested in multiple applications, we have found none in the context of sustainability, and we are therefore excited to present our results in this forum.
Apart from that, the application of this model using both desirable and undesirable outputs under the weak disposability assumption represents an excellent opportunity to give proper feedback to cities. Cities can benefit from this analysis to enhance their performance, even though there are evident limitations due to the DEA methodology, the available data, and the fact that the goal proposed in the model affects the search for benchmarks. Looking closely at the influence of each efficient city (benchmark) over the inefficient cities (Table 5), specific policies from benchmark cities can be monitored to ascertain their relevance to the measured efficiency of each city. Since all the selected cities were gathered in the data under the same standard (ISO 37120) and have similar population sizes, policies can be followed up to improve decision-making processes. The effect of specific urban policies can be explored by simulating the future evolution of inputs and outputs in this model, giving insight into the overall effect of city decisions on the most efficient outcome for the cities' future. Later developments could include exploring simulations of KPI evolution to verify reasonable performance.
Confidence regions for neutrino oscillation parameters from double-Chooz data In this work, an independent and detailed statistical analysis of the double-Chooz experiment is performed. To have a thorough understanding of the implications of the double-Chooz data on both oscillation parameters $\sin^{2}(2\theta_{13})$ and $\Delta m^2_{31}$, we decided to analyze the data corresponding to the Far detector, with no additional restriction. By doing this, confidence regions and best fit values are obtained for ($\sin^{2}(2\theta_{13})$, $\Delta m^2_{31}$). This analysis yields an out-of-order $\Delta m^2_{31}$ minimum, which has already been mentioned in previous works, and it is corrected with the inclusion of additional restrictions. With such restrictions it is obtained that $\sin^{2}(2\theta_{13}) = 0.084^{+0.030}_{-0.028}$ and $\Delta m^2_{31} = 2.444^{+0.187}_{-0.215} \times 10^{-3}$ eV$^2$/c$^4$. Our analysis allows us to study the effects of the so-called "spectral bump" around 5 MeV; it is observed that a variation of this spectral bump may be able to move the $\Delta m^2_{31}$ best fit value, in such a way that $\Delta m^2_{31}$ takes the order of magnitude of the MINOS value. Finally, and with the intention of understanding the effects of the preliminary Near detector data, we performed two different analyses, aiming to eliminate the effects of the energy bump. As a consequence, it is found that, unlike in the Far detector analysis, the Near detector data may be able to fully determine both oscillation parameters by itself, resulting in $\sin^2(2\theta_{13}) = 0.095 \pm 0.053$ and $\Delta m^{2}_{31} = 2.63^{+0.98}_{-1.15} \times 10^{-3}$ eV$^2$/c$^4$. The latter analyses represent an improvement with respect to previous works, where additional constraints for $\Delta m^2_{31}$ were necessary. In this work, an independent and detailed statistical analysis of the double-Chooz experiment is performed. In order to have a thorough understanding of the implications of the double-Chooz data on both oscillation parameters $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$, we decided to analyze the data corresponding to the Far detector, with no additional restriction. This differs from previous analyses, which only aim to estimate the mixing angle $\theta_{13}$, without mentioning the effects on $\Delta m^2_{31}$. By doing this, confidence regions and best fit values are obtained for ($\sin^2(2\theta_{13})$, $\Delta m^2_{31}$). This analysis yields an out-of-order $\Delta m^2_{31}$ minimum, which has already been mentioned in previous works, and it is corrected with the inclusion of additional restrictions. With such restrictions it is obtained that $\sin^2(2\theta_{13}) = 0.084^{+0.030}_{-0.028}$ and $\Delta m^2_{31} = 2.444^{+0.187}_{-0.215} \times 10^{-3}$ eV$^2$/c$^4$. Our analysis allows us to study the effects of the so-called "spectral bump" around 5 MeV; it is observed that a variation of this spectral bump may be able to move the $\Delta m^2_{31}$ best fit value, in such a way that $\Delta m^2_{31}$ takes the order of magnitude of the MINOS value. In other words, if we allow the variation of the spectral bump, then we may be able to determine both oscillation parameters using Far detector data only, with no further restrictions from other experiments. Finally, and with the intention of understanding the effects of the preliminary Near detector data, we performed two different analyses, aiming to eliminate the effects of the energy bump.
As a consequence, it is found that, unlike in the Far detector analysis, the Near detector data may be able to fully determine both oscillation parameters by itself, resulting in $\sin^2(2\theta_{13}) = 0.095 \pm 0.053$ and $\Delta m^2_{31} = 2.63^{+0.98}_{-1.15} \times 10^{-3}$ eV$^2$/c$^4$. The latter analyses represent an improvement with respect to previous works, where additional constraints for $\Delta m^2_{31}$ were necessary. DOI: 10.1103/PhysRevD.97.093005 I. INTRODUCTION The double-Chooz experiment estimated the reactor neutrino flux of the Chooz-B nuclear plant by means of its operating parameters. This flux, when interacting with the detector target, induces a number of inverse β decays (IBD). The experiment was designed to run with two detectors located at $L_F \approx 1000$ m (Far) and $L_N \approx 400$ m (Near), but the collaboration's current results report only Far-detector observations. The double-Chooz Near detector was finished in 2016, and only preliminary data have been published until now. In particular, the double-Chooz Far detector reports fewer IBDs than expected. If a neutrino oscillation model explains this deficit, then the oscillation parameters can be obtained from double-Chooz data. In the simplified two-flavor oscillation model, the survival probability of a $\bar{\nu}_e$ with energy $E_\nu$ (MeV) after traveling a distance $L$ (m) is given as

$P_{\bar{\nu}_e \to \bar{\nu}_e} = 1 - \sin^2(2\theta_{13})\,\sin^2(1.27\,\Delta m^2_{31}\,L/E_\nu). \quad (1)$

The main objective of the double-Chooz experiment is the precise measurement of the mixing angle $\theta_{13}$ [1]. Table I shows some results for $\sin^2(2\theta_{13})$. In particular, the double-Chooz collaboration determined $\sin^2(2\theta_{13}) = 0.090^{+0.032}_{-0.029}$, without showing confidence regions and using the value obtained by MINOS, $2.44^{+0.09}_{-0.10} \times 10^{-3}$ eV$^2$/c$^4$, for $\Delta m^2_{31}$ [2]. In this paper we use the double-Chooz data in $\chi^2$ tests to determine the best fit for $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ without assuming an a priori value for the latter, obtaining both parameters as well as their confidence regions. Consequently, double-Chooz data may be used in a unified analysis with other experiments, where both oscillation parameters are obtained simultaneously from their corresponding data. The organization of this work is as follows. In Sec. II the rate + shape (R + S) analysis is presented, aligned with Ref. [2]. The statistic defined there is used for the Far detector analysis only. This section shows how the quantities that define the function $\chi^2_{R+S}$ are obtained. A consistent definition of the expected number of IBD is introduced in Sec. III. Both sections are used in Sec. IV to estimate the oscillation parameters $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ by minimizing the function $\chi^2_{R+S}$. The confidence regions for these parameters are also obtained there, from a diagonal covariance matrix (DCM) and a full covariance matrix (FCM). The values obtained for $\Delta m^2_{31}$ are significantly different from those expected; in order to line up our results with the MINOS experiment, a $\chi^2_{R+S+M}$ test is presented. In Sec. V a discussion related to the spectral bump of the neutrino spectrum at 5 MeV is included to estimate its effect on the $\chi^2_{R+S}$ function. Section VI is devoted to the Far + Near detector data. With the purpose of determining the oscillation parameters $\Delta m^2_{31}$, $\sin^2(\theta_{13})$ and their confidence regions from double-Chooz data without using a priori the value of $\Delta m^2_{31}$ from another experiment, two convenient $\chi^2$ functions are introduced. These statistics also suppress the spectral bump effects mentioned before. To do this, we use the preliminary data from [8] as input to the formalism presented in the previous sections.
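Numerically, Eq. (1) is straightforward to evaluate; the following minimal sketch (ours) reproduces it with L in meters, E in MeV, and Δm²₃₁ in eV²/c⁴.

```python
# Two-flavor survival probability of Eq. (1):
#   P = 1 - sin^2(2*theta13) * sin^2(1.27 * dm2_31 * L / E)
import numpy as np

def survival_probability(sin2_2theta13, dm2_31, L_m, E_MeV):
    return 1.0 - sin2_2theta13 * np.sin(1.27 * dm2_31 * L_m / E_MeV) ** 2

# Far-detector baseline with the best-fit values quoted in this paper
# (the 4 MeV energy is illustrative only):
print(survival_probability(0.084, 2.444e-3, 1050.0, 4.0))
```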
The results obtained are promising and can be used when the collaboration releases new data. Finally, our conclusions are given in Sec. VII. II. RATE + SHAPE ANALYSIS [2] Neutrinos are detected through the positron kinetic energy, $E_{vis}$, in the energy range between 0.5 and 20 MeV, which is divided into 40 energy bins, according to Table 15.2 in [9]. The rate + shape analysis is determined by the function $\chi^2_{R+S}$ of Eq. (2). In the first term, each energy bin requires $N^{obs}_i$ ($N^{exp}_i$), the observed (expected) number of IBD, and a covariance matrix $M_{ij}$ is introduced to include the correlation terms among energy bins. The $N^{obs}_i$ were directly obtained from Fig. 21 in [2]. In this analysis we considered the 17 351 IBD candidates, $N_{tot}$, that occurred during 460.67 days, $T_{on}$. The first thirty-one readings are consistent with previous double-Chooz collaboration data [10]. $N^{exp}_i$ is proportional to the expected number of antineutrinos without oscillations, $n^{exp}_i$, and to the average survival probability in the $i$th energy bin, $\bar{P}_i$, which is closely related to the flavor-oscillation model (1). In the following section the explicit form of these quantities is given; for now we can write $N^{exp}_i \simeq n^{exp}_i\,\bar{P}_i(\theta_{13}, \Delta m^2_{31})$ (Eq. (3)). The diagonal matrix elements $M_{ii}$ contain information on the statistical and systematic uncertainties in each energy bin, while the bin-to-bin correlations correspond to the off-diagonal elements. $M_{ij}$ is discussed in more detail in [2] and is written as the sum of five matrices, whose elements have been taken from Fig. 15.3 in [9]. Since the information in that figure is presented through a color code, basic software was necessary to decode it. We verified that the diagonal elements $M^{stat}_{ii}$, $M^{flux}_{ii}$, and $M^{eff}_{ii}$ previously published in [10] were found in the matrices of [9].
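A hedged sketch (ours) of the first, covariance term of this statistic is the quadratic form of the binned residuals with the full matrix M; the pull terms and the two-reactors-off term of Eq. (2) are omitted here.

```python
# First term of the rate+shape statistic: d^T M^{-1} d over the 40 bins,
# with d_i = N_obs_i - N_exp_i and M the (40 x 40) covariance matrix.
import numpy as np

def chi2_cov(n_obs, n_exp, M):
    d = np.asarray(n_obs, float) - np.asarray(n_exp, float)
    return float(d @ np.linalg.solve(M, d))
```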
In addition to the uncertainties involved in the covariance matrix, eight systematic uncertainties are considered in the second and third terms of $\chi^2_{R+S}$, using $\epsilon_x$ parameters.

Table I. Results for $\sin^2(2\theta_{13})$; the reference of the first row was lost in extraction:
… : $0.109 \pm 0.030^{a} \pm 0.025^{b}$
Abe et al. [5]: $0.097 \pm 0.034^{a} \pm 0.034^{b}$
Abe et al. [6]: $0.102 \pm 0.028^{a} \pm 0.033^{b}$
Novella [7]: $0.102 \pm 0.043^{c}$
Abe et al. [2]: $0.090^{+0.032}_{-0.029}$

Three of the eight are the coefficients of a polynomial for the visible energy variation. These are explicitly introduced in the second term of Eq. (2) by means of a matrix that contains the uncertainties $\sigma_a$, $\sigma_b$, $\sigma_c$ and the correlations $\rho_{ab}$, $\rho_{bc}$, and $\rho_{ac}$ (Table II). The other sources of systematic uncertainty are considered through five parameters $\epsilon_k$, whose standard deviations are given in Table III. The fourth term in (2), or $\chi^2_{off}$, is the contribution of the two-reactors-off (2-off) data, in which $N^{obs}_{off}$ ($N^{exp}_{off}$) is the observed (expected) number of IBD candidates. According to Ref. [2], $N^{obs}_{off} = 7$, and $N^{exp}_{off}$ is determined by the residual $\bar{\nu}_e$'s ($\epsilon_4$) and the background ($n_{bg}$), where $n_{bg} = B \cdot T_{off}$; $B = 1.56$ events/day is the total background rate provided by the $T_{off} = 7.24$ days of reactor-off data [9]. For IBD events with neutrons captured on Gd, $\bar{P}_{off}(\theta_{13}, \Delta m^2_{31})$ denotes the average survival probability over the whole antineutrino spectrum with the reactors off [9,11], as suggested by Eq. (1); $L = 1050$ m is the average distance from the Far detector to both reactors. The term in angle brackets results from averaging the survival probability (1) over the whole energy range, 0.5 MeV ≤ $E_{vis}$ ≤ 20.0 MeV (a short numerical sketch of such a bin average is given below). The energy of the incoming $\bar{\nu}_e$ is written in terms of the corrected visible energy, or positron kinetic energy, $E_{vis}$, through Eq. (9). The last term of (9) results directly from the observation of the IBD and depends on the positron and nucleon masses. III. EXPECTED NUMBER OF IBD, $N^{exp}_i$ As introduced in Eq. (3), the expected number of IBD, $N^{exp}_i$, is closely related to the oscillation model; its full definition is given in Eq. (10). In this equation the expected neutrino spectrum without oscillations, $n^{exp}_i$, depends on the operating parameters of the nuclear reactor. These quantities are obtained from Fig. 21 in [2]; a discrepancy between the observed and the expected neutrino spectrum without oscillations between 4 and 6 MeV has been detected. This energy bump will impact the determination of the oscillation parameters and is discussed in Sec. V. The residual neutrinos ($\epsilon_4$) are produced by the radioactive elements in the cores of the nuclear reactors, even when they are turned off; the corresponding term has been included to take this contribution to the spectrum into account. Both types of events are influenced by the same average oscillation probability over each energy bin, $\bar{P}_i(\theta_{13}, \Delta m^2_{31})$. Additionally, the three background sources, $\epsilon_1$, $\epsilon_2$, and $\epsilon_3$, mentioned in the description of Table III, are taken into account in Eq. (10). The pull parameters $\epsilon_1, \dots, \epsilon_5$ of Eq. (2) are corrections to the predicted antineutrino spectrum. These are as follows: $\epsilon_1$, antineutrino spectrum error due to β decays of ⁸He and ⁹Li; $\epsilon_2$, error due to n + μ; $\epsilon_3$, accidentals; $\epsilon_4$, residuals; $\epsilon_5$, uncertainty of the squared mass difference $\Delta m^2_{31}$. The last one is removed later [2]: because we use the double-Chooz data to determine the best fit for $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ without assuming an a priori value for $\Delta m^2_{31}$, the pull associated with the correction to this parameter, $\epsilon_5$, is not required anymore. Therefore, the $\chi^2$ statistic (2) is a multiparametric function of two oscillation parameters and seven pulls. The $\Delta\chi^2_{R+S}$ function is defined as the difference between the $\chi^2_{R+S}$ function and its absolute minimum. IV. CONFIDENCE REGIONS OF OSCILLATION PARAMETERS $\theta_{13}$ AND $\Delta m^2_{31}$ WITH FAR DATA ONLY We report the minimization of the function $\chi^2_{R+S}$ and its level curves considering the FCM $M$ in Fig. 1; Figure 2 takes into account only the diagonal elements of the covariance matrix $M$ (DCM). In both plots several local minima are shown. The absolute minimum (black star) in Fig. 1 has coordinates (0.087, 27.043 × 10⁻³ eV²/c⁴) in the $\sin^2(2\theta_{13})$-$\Delta m^2_{31}$ plane, with a $\chi^2_{R+S}$ value of 37.17 for the FCM analysis. Another local minimum (white star) is (0.090, 2.512 × 10⁻³ eV²/c⁴), with a $\chi^2_{R+S}$ value of 41.83. These points share close values of $\sin^2(2\theta_{13})$; nevertheless, their corresponding values of $\Delta m^2_{31}$ are significantly different. This effect is closely related to the existence of the energy bump of the neutrino spectrum. These points are listed in Table IV. The absolute minimum obtained implies $\Delta m^2_{31} = 27.043 \times 10^{-3}$ eV²/c⁴, a value one order of magnitude higher than those reported elsewhere. However, given the quasiperiodic nature of $\chi^2_{R+S}$ as a function of $\Delta m^2_{31}$, it can be argued that any one of the local minima may be the right one, and additional experimental data and/or improved models would then be needed to discriminate between minima. For this reason, in Sec. VI we introduce data-data analyses that allow the elimination of the energy bump effect; to do this, Near detector data are required.
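As referenced above, a minimal sketch (ours) of a bin-averaged survival probability: the paper averages Eq. (1) over each bin after converting $E_{vis}$ to $E_\nu$ via Eq. (9), a conversion omitted here for simplicity.

```python
# Average of the survival probability (1) over one energy bin, approximated
# by a simple grid average over neutrino energies e_lo..e_hi (in MeV).
import numpy as np

def pbar_bin(sin2_2theta13, dm2_31, L_m, e_lo, e_hi, n=200):
    e = np.linspace(e_lo, e_hi, n)
    p = 1.0 - sin2_2theta13 * np.sin(1.27 * dm2_31 * L_m / e) ** 2
    return float(p.mean())
```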
FIG. 1. $\chi^2_{R+S}$ level curves for the FCM analysis; the best fit corresponds to an inconsistent $\Delta m^2_{31}$ value. Nevertheless, it is remarkable that a local minimum appears at $\sin^2(2\theta_{13}) = 0.090$, $\Delta m^2_{31} = 2.512 \times 10^{-3}$ eV²/c⁴; this minimum is included in the confidence region and is consistent with the $\Delta m^2_{31}$ value given by MINOS [12] (see Table IV). The plane $\sin^2(2\theta_{13}) = 0.087$ is indicated with a red vertical line. The intersection of this plane with the $\chi^2_{R+S}$ level curves has been plotted in Fig. 4.

FIG. 2. Behavior of the $\chi^2_{R+S}$ statistics for the DCM analysis. The confidence region up to 90% C.L. for ($\sin^2(2\theta_{13})$, $\Delta m^2_{31}$) shows two disjoint regions. The best fit is found at $\sin^2(2\theta_{13}) = 0.091$, $\Delta m^2_{31} = 27.043 \times 10^{-3}$ eV²/c⁴, again an inconsistent $\Delta m^2_{31}$ value. This time, the local minimum appears at $\sin^2(2\theta_{13}) = 0.085$, $\Delta m^2_{31} = 2.422 \times 10^{-3}$ eV²/c⁴ and still belongs to the confidence region. This local minimum remains consistent with the $\Delta m^2_{31}$ value given by MINOS [12] (see Table IV). A discrimination criterion is needed. The plane $\sin^2(2\theta_{13}) = 0.091$ is indicated with a red vertical line. The intersection of this plane with the $\chi^2_{R+S}$ level curves has been plotted in Fig. 4.

The simplest discrimination criterion is the one that establishes the $\Delta m^2_{31}$ value closest to that obtained in another experiment. Consistently with [4], we introduce into the function (2) the additional term

$\chi^2_{MINOS} = \left( \frac{\Delta m^2_{31} - \Delta m^2_{MINOS}}{\sigma_{MINOS}} \right)^2, \quad (13)$

where $\Delta m^2_{MINOS} = 2.44 \times 10^{-3}$ eV²/c⁴ and $\sigma_{MINOS}$ is the average of the $\Delta m^2_{31}$ uncertainties reported by MINOS (see Table IV); a one-line numerical sketch of this penalty is given after this passage. So, by minimizing $\chi^2_{R+S+M} = \chi^2_{R+S} + \chi^2_{MINOS}$, the best fits listed in Table V are obtained. The confidence regions generated from the $\chi^2_{R+S+M}$ statistics are consistent with those published in [9] and are shown in Fig. 3. Figure 4 shows $\Delta\chi^2_{R+S}$ as a function of $\Delta m^2_{31}$, where $\sin^2(2\theta_{13})$ has been fixed at 0.087 and 0.091. These values correspond to the $\sin^2(2\theta_{13})$ coordinate of the absolute minimum obtained from the FCM and DCM analyses, respectively (Table IV). A succession of $\Delta\chi^2_{R+S}$ local minima appears, denoted as $\chi^2_{m_j}$; the $j$th minimum has the coordinate $\Delta m^2_{31}|_j$. In the DCM analysis, the separation between two consecutive minima of Fig. 4 is given by Eq. (14). As a consequence of the addition of $\chi^2_{MINOS}$ to the statistics, the absolute minimum is discarded. In this way, the best fit is found at $\sin^2(2\theta_{13}) = 0.092$, $\Delta m^2_{31} = 2.444 \times 10^{-3}$ eV²/c⁴ for the FCM analysis, and $\sin^2(2\theta_{13}) = 0.084$, $\Delta m^2_{31} = 2.444 \times 10^{-3}$ eV²/c⁴ for the DCM analysis. The wider region corresponds to the full analysis, which therefore has greater uncertainties (see Table V). For comparison purposes we introduce the Daya Bay data for the parameters $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ up to 95.45% C.L. [13]. All the analyses are consistent with each other. Besides, we can identify the absolute minimum, with $\chi^2_{m_3} = 0$, at $\Delta m^2_{31} = 27.043 \times 10^{-3}$ eV²/c⁴, and $\chi^2_{m_1} = 3.27$ at $\Delta m^2_{31} = 2.422 \times 10^{-3}$ eV²/c⁴, which is closer to the currently accepted $\Delta m^2_{31}$ value. A weighted average of the neutrino energy can be defined as $\bar{E}_\nu = \sum_i \omega_i E_{\nu,i}$, where $\omega_i$ is the percentage of the observed IBD in each energy bin. Substituting this value into the term $\sin^2(1.27\,\Delta m^2_{31} L/\bar{E}_\nu)$, we find that it vanishes when $\Delta m^2_{31} = 0.012$ eV²/c⁴. This value is approximately equal to the average of $\lambda_1$, $\lambda_2$, and $\lambda_3$. The small variation of these values may be attributed to the complex dependence of $\chi^2_{R+S}$ on the squared sine function and to the average value of $\bar{E}_\nu$ used.
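As referenced above, the penalty of Eq. (13) is a one-line computation; in this sketch (ours) σ_MINOS is taken as the average of the asymmetric MINOS uncertainties, as stated in the text.

```python
# MINOS penalty added to chi^2_{R+S} to form chi^2_{R+S+M} (Eq. (13)).
def chi2_minos(dm2_31, dm2_minos=2.44e-3, sigma_minos=0.095e-3):
    # sigma_minos: average of the +0.09/-0.10 x 10^-3 eV^2/c^4 uncertainties
    return ((dm2_31 - dm2_minos) / sigma_minos) ** 2
```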
V. SPECTRAL BUMP EFFECTS The Far detector results alone can be used to discuss how the spectral bump around 5 MeV in the neutrino spectrum affects the $\Delta m^2_{31}$ fit and how the distribution of the $\chi^2_{m_j}$ might change. The origin of the energy bump is still undetermined, as discussed by [2,14]. The distortion seems to be highly correlated with the reactor flux; this hypothesis was tested by the double-Chooz collaboration, finding that the number of reactors on has an influence on the distortion rate. The pattern of minima $\chi^2_{m_j}$ is sensitive to changes in the energy bump. As an example, we introduce a hypothetical source of reactor neutrinos, given by $\eta_i \equiv \xi\, n^{exp}_i$, added to the prediction in the energy bump (a short code sketch of this injection follows below). As a consequence, the oscillatory behavior of the $\chi^2$ functions remains, but $\Delta m^2_{31}$ can diverge to two orders of magnitude higher than expected, or even fall into the order set by MINOS (Fig. 5). Note that when $\xi = 5\%$, $\Delta m^2_{31}$ is still one order of magnitude higher than expected; it diverges by two orders of magnitude when $\xi = 10\%$; but when $\xi = 20\%$ the squared mass difference falls to $8.2 \times 10^{-3}$ eV²/c⁴. Thus, the effect of the spectrum distortion is relevant, but its source is unknown. If we want to obtain both parameters simultaneously, it is necessary to change our point of view, suppressing the energy bump as discussed in the next section. This is encouraging for performing a unified analysis with other experiments.

FIG. 5. $\chi^2_{R+S}$ profiles when a hypothetical source of reactor neutrinos η is added to the prediction in the energy bump, when the unknown source is 5% (black solid line), 10% (red dashed line), and 20% (blue short-dashed line) of the total prediction, each with its respective $\sin^2(2\theta_{13})$ best fit. The oscillatory behavior of the $\chi^2$ functions remains, but the $\Delta m^2_{31}$ best fit adopts different values: $2.73 \times 10^{-2}$ eV²/c⁴ at $\xi = 5\%$, $1.79 \times 10^{-1}$ eV²/c⁴ at $\xi = 10\%$, and $8.20 \times 10^{-3}$ eV²/c⁴ at $\xi = 20\%$. The $\Delta m^2_{31}$ best fit is sensitive to the energy bump changes.

VI. CONFIDENCE REGIONS OF OSCILLATION PARAMETERS $\theta_{13}$ AND $\Delta m^2_{31}$ WITH FAR + NEAR DATA Although the Near detector was built in 2016, only preliminary results have been published [8]. These preliminary double-Chooz two-detector results can be used as input to the formalism presented in Secs. II and III. A direct comparison between the two sets of data (a data-data analysis) has been considered to cancel the spectrum distortion in the determination of the oscillation parameters. In order to perform a data-data analysis we are restricted to comparing only the Far II data to the Near data from [8]. In particular, we propose a $\chi^2_{(1)}$ statistic in which $N^{obs}_{Far/Near}$ is the observed number of IBD candidates at the Far/Near detector in the bin with energy $E^{vis}_i$, $\Omega_i$ is a weight factor, and $\bar{P}_i(\theta_{13}, \Delta m^2_{31}; L_{Far/Near})$ is the averaged survival probability over each energy bin at the Far/Near detector. This statistic suppresses the use of the prediction of the unoscillated reactor neutrino signal spectrum $n^{exp}_{Near/Far}$ at the Near/Far detector. A second data-data statistic, $\chi^2_{(2)}$, can be defined analogously; the corresponding best fits are listed in Table VI. By means of the data-data analyses, the influence of the spectral distortion on the $\Delta m^2_{31}$ determination is highly suppressed. Figure 6 shows the 68.27%, 90%, and 95.45% C.L. regions and the best fit; through data-data analyses the spectral bump effect in the $\Delta m^2_{31}$ determination is highly suppressed. Three main points are remarkable. (i) Data-data analyses no longer show two disjoint regions, as $\chi^2_{R+S}$ does in Sec. IV.
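As referenced above, the bump-injection test of Sec. V can be sketched as follows (our own illustration; the 4-6 MeV window is taken from the discrepancy region noted in Sec. III).

```python
# Add a hypothetical reactor-neutrino source eta_i = xi * n_exp_i to the
# prediction inside the energy-bump region before refitting.
import numpy as np

def add_bump(n_exp, e_vis, xi, window=(4.0, 6.0)):
    n = np.asarray(n_exp, float).copy()
    mask = (e_vis >= window[0]) & (e_vis <= window[1])
    n[mask] *= 1.0 + xi                  # eta_i = xi * n_exp_i in the window
    return n
```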
Even when the oscillatory behavior is still present, $\Delta m^2_{31}$ is now fully defined; this is shown in Fig. 7. (ii) $\Delta m^2_{31}$ does not differ by one or more orders of magnitude with respect to MINOS. (iii) $\chi^2_{(1)}$ and $\chi^2_{(2)}$ do not depend on external information, unlike $\chi^2_{R+S+M}$. Even when both the $\chi^2_{(1)}$ and $\chi^2_{(2)}$ statistics still have the oscillatory behavior in $\Delta m^2_{31}$, there is a well-defined difference between the absolute minimum and the local minima, as can be seen in Fig. 7. It is important to recall that in this section preliminary data from [8] were used. In fact, the formalism described in Secs. II and III, in addition to the data-data statistics of this section, can be used to analyze the double-Chooz two-detector data to determine both $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ without any restrictions from other experiments. This work represents a useful tool to build a unified analysis of double-Chooz, Daya Bay, and RENO, as suggested in [15] and [16], even without solving the spectrum bump problem. VII. CONCLUSIONS The proposed $\chi^2_{R+S}$ statistical analysis yields results consistent with those published by the double-Chooz collaboration using Far data. The approach followed allows us to generate the confidence regions for the oscillation parameters $\Delta m^2_{31}$ and $\sin^2(2\theta_{13})$ shown in Figs. 1 and 2. The effect of the nondiagonal elements of the covariance matrix on the oscillation parameters can be assessed by comparison with the DCM analysis, which is in some sense a zero bin-to-bin correlation case, based on the double-Chooz Far detector analysis only. It is observed that in the FCM analysis the confidence regions are wider in $\sin^2(2\theta_{13})$, and therefore have greater uncertainties. Each of the FCM and DCM analyses reports the existence of a $\chi^2_{R+S}$ absolute minimum corresponding to a $\Delta m^2_{31}$ value that is inconsistent with the MINOS $\Delta m^2_{31}$ value. In fact, the double-Chooz Far data do not by themselves provide enough evidence to perform a squared mass difference $\Delta m^2_{31}$ estimation with the data available before 2016. However, the first local minimum agrees with the MINOS $\Delta m^2_{31}$ value, as shown in Fig. 4. In order to force the first local minimum to become the absolute minimum we introduced the additional term (13) in $\chi^2_{R+S}$. Hence, by minimizing $\chi^2_{R+S+M}$ we obtained the best fit parameters $\sin^2(2\theta_{13}) = 0.084^{+0.030}_{-0.028}$ and $\Delta m^2_{31} = 2.444^{+0.187}_{-0.215} \times 10^{-3}$ eV²/c⁴, as can be seen in Table V. In Fig. 3 we have established the confidence regions for the neutrino oscillation parameters $\theta_{13}$ and $\Delta m^2_{31}$ from double-Chooz Far data. In Sec. V we introduced a hypothetical source of reactor neutrinos to show how the spectrum distortion affects the oscillatory behavior of the $\chi^2$ functions and the $\Delta m^2_{31}$ value. We found that a correction of 20% in the expected spectrum distortion, independently of its source, corrects the order of magnitude of $\Delta m^2_{31}$, as indicated in Fig. 5. To cancel the spectrum distortion in the determination of the oscillation parameters, we performed two data-data analyses using preliminary two-detector data. In both cases, the $\Delta m^2_{31}$ values obtained are not so different from those currently accepted by the community, as shown in Figs. 6 and 7 and Table VI. Data-data analyses no longer show two disjoint regions, as $\chi^2_{R+S}$ does in Sec. IV. Also, the value of $\Delta m^2_{31}$ found does not differ by one or more orders of magnitude with respect to MINOS, and the whole analysis is independent of external information.
The spectral bump effect in the $\Delta m^2_{31}$ determination is highly suppressed by means of the data-data analyses. The oscillatory behavior is still present, but $\Delta m^2_{31}$ is now fully defined and agrees with the currently accepted value of this parameter. In Fig. 7, the horizontal lines at $\Delta\chi^2 = 2.3$, 4.61, and 6.18 represent the 68.27%, 90.0%, and 95.45% C.L., respectively; notice that only the absolute minimum falls into these regions, and it is near the expected one. The formalism described in Secs. II and III, in addition to the data-data statistics from Sec. VI, can be used to analyze the two-detector double-Chooz data to determine both $\sin^2(2\theta_{13})$ and $\Delta m^2_{31}$ without any restrictions from other experiments. In this way, this work extends the capabilities of the double-Chooz experiment by allowing us to measure two oscillation parameters, $\Delta m^2_{31}$ and $\sin^2(2\theta_{13})$. This work might contain elements of a future unified analysis with other experiments, such as Daya Bay and RENO, even with the spectrum bump problem unresolved. ACKNOWLEDGMENTS B. V. P. acknowledges the Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, for the hospitality during his PhD studies in sciences. We also thank the kind referee for the positive and invaluable suggestions that have improved the manuscript greatly. Special thanks go to Karla Rosita Téllez Girón Flores for her suggestions. This work was partially supported by COFAA-IPN, Grants No. SIP20180062 and No. SIP20170031 IPN, and the Consejo Nacional de Ciencia y Tecnología through the SNI-México.
Apparent coordinated and communal hunting behaviours by Erabu sea krait Laticauda semifasciata Opportunistic observations of Erabu sea kraits (Laticauda semifasciata) provide evidence that this species undertakes a novel foraging tactic: coordinated communal hunting. Erabu sea kraits prey on cryptic fish species in highly complex reef habitats. Intra- and interspecific cooperative hunting strategies may increase the chances for all members of the hunting party to encounter and capture prey in these complex habitats. Here, we observed 52 instances of communal hunting by Erabu sea kraits with conspecifics and other predatory fishes at recreational dive sites in Southern Lombok, Indonesia. These observations highlight the potential higher cognitive capacity of sea kraits to coordinate activities around communal hunting events. Apparent coordinated and communal hunting behaviours by Erabu sea krait Laticauda semifasciata Ruchira Somaweera 1,2*, Vinay Udyawer 3, A. A. Thasun Amarasinghe 4, Joe de Fresnes 5, Jay Catherall 6 & Galina Molchanova 6 Coordinated and communal hunting by a group of predators involves each individual within that group synchronising their actions in time and space to increase successful prey capture 1. These behaviours have recently gained considerable attention, as such coordination requires elevated cognitive demands. Role-differentiated, coordinated and communal hunting is rare among animals, and is only known from a handful of mammals including primates, canines, hyenas, felines and cetaceans 2,3, birds including raptors and corvids 4, and fish including moray eels and trout 5,6. Very limited instances of opportunistic, coordinated and communal hunting have, however, also been recorded in reptiles, including crocodilians 7, varanids 8, and a single case of a snake 9. In that single observational study, Cuban boas (Chilabothrus angulifer) were suggested to take the positions of other individuals into account when choosing the hunting location for fruit bats, improving the effectiveness of the hunt. Here we describe possible coordinated communal hunting behaviour of a less-known marine snake, the Erabu sea krait.
Erabu or Chinese sea kraits (Laticauda semifasciata) are typically distributed in the tropical and subtropical waters around Japan, China including Taiwan, South Korea, the Philippines and Indonesia 10. It is possibly the most aquatic of all the amphibious sea kraits within the genus Laticauda, living a nearly fully aquatic life except when laying eggs. Daytime observations of this species during previous surveys have been in deep water (depth not specified, but sampling undertaken on SCUBA); however, they seem to move to intertidal areas and coastal tidal caves at night 11, but always remain submerged 12. This species relies on freshwater seepage and heavy rain for its freshwater supply 13, and the availability of freshwater sources may be a determinant of its distribution 14. Despite the highly mobile nature of other members of the genus and their capacity to regularly move between neighbouring islands 15,16, Erabu sea kraits show distinctive genetic structure between island groups within their range 17. Specimen dissection studies at a local scale have shed some light on the diet of Erabu sea kraits. At Orchid Island in Taiwan, stomach contents from 73 specimens recorded 16 fish families 11. Hatchlings only ate fish of the Mugiloididae, while subadult and mature kraits fed mainly on the Emmelichthyidae, Acanthuridae, and Pomacentridae, with mature males showing a wider range of food items (15 families) than adult females (6 families). Additionally, Bacolod 18 briefly described females as always having eels and other types of fish in their stomachs, while Pickwell 19 observed them reacting to the smell of killifish (Fundulus parvipinnis) and mudsuckers (Gillichthys mirabilis) under captive conditions. The cryptobenthic and reef-associated nature of the majority of the preferred prey of Erabu sea kraits means that foraging and prey capture success are likely influenced by the complexity of the habitat 20, with lower capture success in highly structured and complex reef systems. In these instances, inter- and intraspecific coordinated group hunting likely increases the chances of prey capture 21. Here we use opportunistic field observations, anecdotal records, and historical video records to describe apparent cooperative hunting by Erabu sea kraits, potentially used to increase prey capture success. Methods Observations of foraging Erabu sea kraits were made during 32 recreational dives at 'the Cathedral' (−8.899757°, 116.073019°) in Belongas, south of Lombok Island in Indonesia. The site, located 650 m off the closest shore, comprises a tall pinnacle surrounded by boulders reaching ~55 m in depth. It experiences high waves and strong currents, with a surface water temperature of 27-30 °C and 25-27 °C at the bottom throughout the year.
Deep-diving certified scuba divers in groups of two to six visited the site 32 times between September 2018 and July 2022 during recreational dive tours. Once encountered, individuals of Erabu sea kraits were opportunistically observed from a distance of ~3 m (unless the kraits approached the divers) and followed for durations of 5-20 min, depending on dive conditions and the behaviour of the kraits (e.g., when kraits swam away from the reef, or into depths divers were unable to follow). On some occasions, encounters were filmed with GoPro cameras without external lights. This work purely comprises a collation of opportunistic observations, made without interference during recreational diving tours; therefore, no animal ethics approval was obtained specifically for the work. However, standard guidelines for responsible animal interactions while scuba diving were strictly followed. As kraits were encountered during pre-planned recreational deep dives (> 20 m), depth and time restrictions meant that observers could not follow the groups of kraits through the full sequence of hunting behaviours in one dive. Nevertheless, repeated behaviours were noted across all observations and used to develop an ethogram highlighting key behaviours exhibited during apparent communal hunting events. Observations Erabu sea kraits were encountered on at least 52 separate occasions, during all 32 dives. Of the 52 observations, 12 were filmed, with the majority being anecdotal records (observed by JC and GM during recreational dives). The number of individuals per dive ranged from two individuals on 21 June 2022 to 21 individuals on 5 October 2019, and included individuals of ~100 to ~130 cm in estimated total length. All encounters were between 23 and 45 m in depth. Common behaviours recorded during the encounters included at least 11 unique behaviours (Table 1). Due to the anecdotal nature of most observations, detailed time-budgets for the full sequence of behaviours could not be collected; however, descriptions of components of interspecific and intraspecific communal hunting are provided here. Forty-three separate encounters were of swimming or foraging individuals, and nine instances were of resting individuals (where individuals were drifting motionless along the substrate before being approached by divers; Table 1), five of these on a sandy bottom at ~42 m depth and the rest among the reef structures at ~25 m depth. Instances of both intraspecific and interspecific communal hunting were observed, with some level of apparent coordination. Additionally, the sympatric yellow-lipped sea krait (L. colubrina) was encountered on three dives, but all individuals of this species were observed to be solitary and at depths of < 20 m. Intraspecific coordinated and communal hunting On multiple occasions, two to six kraits were moving in closely bound groups (< 2 m apart), parallel to each other and to the reef, at a distance of ~0.5 to 2 m from the reef (Fig. 1).
Individuals largely seemed to follow each other, with the members at the front of the group located closer to the reef than those at the back. Once those at the front of the group entered crevices and holes in the reef, those at the back remained outside at 1-2 m distance from the crevices rather than continuing to swim or explore other crevices. Our observations were highly opportunistic, but in at least one instance, when two individuals exited the crevices, three others started searching new crevices, while those that did the prior searches followed the new search group. Kraits entering the crevices visibly flushed the fish hiding in them, but we did not clearly observe any instances of the conspecifics that remained outside successfully capturing the escaping fish. Although a successful capture of prey was not observed on this occasion, likely due to the short observation time, the consistent following and stopping of the group as the lead krait interrogated crevices suggests there may be a benefit to individuals that hunt in a group. Interspecific coordinated and communal hunting On seven occasions, up to four kraits were observed foraging with bluefin trevally (Caranx melampygus), and on three occasions with longface emperors (Lethrinus olivaceus) (Fig. 2). These observations were relatively rare; however, there was a clear distinction in the roles played by the sea kraits and fish during these foraging events. In most cases, sea kraits foraged within crevices, while fish coordinated their movements closely to follow the kraits and capture prey fish that escaped from the reef crevices. In one instance, a krait undertaking search-swimming behaviour abruptly stopped over a coral crevice where three bluefin trevally were searching for prey. The krait commenced investigating the crevice and seemed to successfully capture prey in the reef crevice, which changed the behaviour of the trevally to hovering directly over the krait as it commenced feeding. Each trevally seemed to compete for the position close to the opening of the crevice the krait was investigating, presumably to capture any other escaping fish, with one trevally driving the other individuals away. In these cases, it was difficult to determine whether the predatory fish played any active role in the communal hunts (in contrast to just following kraits); however, their presence along the reef pushed the prey fish into the reef and presumably increased the prey capture success of the kraits within the reef crevices.
Discussion Known strategies of foraging in sea snakes range from 'float and wait' tactics, where pelagic snakes ambush small fish that congregate under flotsam 22, to browsing across large areas to locate prey and food sources 23, with foraging primarily considered a solitary behaviour 24. Foraging studies of sea snakes in the past have focused on what cues individual snakes utilise 25, and how they use favourable environmental conditions, e.g., tidal cycles, to maximize prey detection and capture 26,27. However, little has been recorded on the use of coordinated hunting tactics between conspecifics, and between sea snakes and other reef predators, to increase foraging success. In most instances, observations of sea snakes in other reef systems seem to suggest individuals rarely respond or react to conspecifics when in close proximity during foraging 24; nevertheless, synchrony in capture data of turtle-headed sea snakes (Emydocephalus annulatus) in New Caledonia suggests that, at least in that population, some form of cryptic social organisation does exist 28. Whether this social organisation provides some advantage to the foraging success of individuals, however, is still unknown. There are several instances of snakes gathering to feed due to a common stimulus. For example, dog-faced water snakes (Cerberus rynchops) and black-ringed mangrove sea snakes (Hydrelaps darwiniensis) gather in mangrove creeks in notable numbers during advancing and receding tides to feed on fish moving in narrow creeks (Somaweera pers. obs.), while multiple species of boids and colubrids form aggregations in cave passages to hunt bats entering and exiting roosts 29. Galapagos racers (Philodryas biserialis) gather to hunt iguana hatchlings during the hatching season (see https://youtu.be/B3OjfK0t1XM), while Australian scrub pythons (Simalia amethistina) seasonally congregate below communal rookeries of birds during the nesting season 30. However, field observations in these cases did not establish that the snakes take each other's positions and/or actions into account during foraging; therefore, it is difficult to confirm any form of coordinated feeding in these cases. Coordination is often assumed based on the perceived complexity of hunting patterns. In all reported cases of apparent coordinated hunting in reptiles, some of the hunters drive the prey towards others, distract prey to facilitate the attack by others, or force prey into a compact area and then take turns attacking it 9. Coordinated or cooperative hunting requires the focal hunter to take the other hunters' actions into account and therefore arguably requires complex cognitive ability 31. Most of these hunts, however, are likely opportunistic, where each individual attempts to increase the probability of catching the prey for itself. The limited, opportunistic observations made on Erabu sea kraits herein, however, suggest that a potentially higher level of coordination is involved than simultaneous group hunting, given that individuals likely play different roles during a hunt. They are visually aware of the actions of the others in the group (e.g., remaining outside crevices when one group is hunting) and react accordingly (e.g., taking turns entering crevices). For some species and systems, it has been shown that coordinated hunts increase the effectiveness of prey capture, and thus increase food intake per individual 32, but others have shown that this is not necessarily true 33. It is possible that communal behaviours such as these have other social functions.
Interspecific coordinated hunts between snakes and other reef predators are extremely rare. Anecdotal footage of sea snakes undertaking similar interspecific group hunting has been recorded elsewhere, e.g., group hunting by Erabu sea kraits with yellow goatfish (Parupeneus cyclostomus) and bluefin trevally (Caranx melampygus) at other locations in the Banda Sea (see https://www.bbc.co.uk/programmes/p0038t09); between the closely related Katuali sea kraits (Laticauda schistorhyncha) and bluefin trevally in Niue (https://youtu.be/Q7pkazXvUG4); and between olive sea snakes (Aipysurus laevis) and coral trout (Plectropomus leopardus) on Ningaloo Reef, Western Australia (Hans Kemps, pers. obs.). In these cases, the kraits possibly play a similar role to that of moray eels (Gymnothorax javanicus) in their recorded interspecies coordinated hunting with coral groupers (Plectropomus pessuliferus) 5, where groupers corner prey into the reefs while eels flush cryptic prey out. The coordination between the two species results in the mutually beneficial outcome of increased prey capture success for both eels and groupers during the hunt. Although opportunistic, our observations suggest that Erabu sea kraits potentially undertake intra- and interspecific coordinated communal hunting. These observations indicate that by hunting in groups (with conspecifics or other predatory fishes), all members of the hunting party are likely to increase their chances of prey capture; however, this is yet to be quantified. Here we have developed an ethogram of key behaviours observed; however, there is a need to gain further information to build time-budgets for each of these behaviours and assess any observable patterns in communal and cooperative hunting in these species. Future research should also work to quantify the potential advantage Erabu sea kraits obtain from communal hunting. This could include the identification and longer-term tracking of individuals within hunting parties via marking and/or telemetry, and quantifying whether hunting in groups results in higher prey capture rates than solitary hunts. It also remains unclear whether involvement in these coordinated and communal hunts is an individual personality trait, and how individuals coordinate and synchronize their hunting. Our observations add to a growing body of literature on higher cognition levels than previously assumed among reptiles, specifically snakes.

Table 1 (excerpt). Following (intraspecific): swimming closely behind another sea krait that is conducting prey-searching behaviour; swimming < 2 m above reef structure; head pointed towards the lead krait; tongue flicking. Following (interspecific): swimming closely behind/among other predatory fish (usually larger than the krait); swimming < 2 m above reef structure; head pointed towards the predatory fish or towards the direction of …

Figure 1. Records of intraspecific cooperative hunting by groups of Erabu sea kraits (Laticauda semifasciata) during recreational dives at 'the Cathedral' dive site in Southern Lombok, Indonesia. Sea kraits were observed swimming slowly in groups along a section of reef, displaying cooperative hunting with focal kraits flushing prey out from crevices while conspecifics followed behind.
Figure 2. Records of interspecific group foraging between Erabu sea kraits and other predatory fish at dive sites in Southern Lombok. Predatory fish species that were observed to communally hunt with sea kraits included bluefin trevally (Caranx melampygus; top) and longface emperors (Lethrinus olivaceus; bottom).
Magnetic interactions in BiFe0.5Mn0.5O3 films and BiFeO3/BiMnO3 superlattices A clear understanding of the exchange interactions between magnetic ions in substituted BiFeO3 is the prerequisite for comprehensive studies of its magnetic properties. BiFe0.5Mn0.5O3 films and BiFeO3/BiMnO3 superlattices have been fabricated by pulsed laser deposition on (001) SrTiO3 substrates. Using piezoresponse force microscopy (PFM), ferroelectricity at room temperature has been inferred from the observation of PFM hysteresis loops and the electrical writing of ferroelectric domains for both samples. Spin glass behavior has been observed in both samples through temperature-dependent magnetization curves and the decay of thermo-remnant magnetization with time. The magnetic ordering has been studied by X-ray magnetic circular dichroism measurements, and the Fe-O-Mn interaction has been confirmed to be antiferromagnetic (AF). The observed spin glass in BiFe0.5Mn0.5O3 films has been attributed to a cluster spin glass due to Mn-rich ferromagnetic (FM) clusters in an AF matrix, while the spin glass in BiFeO3/BiMnO3 superlattices is due to competition between AF Fe-O-Fe, AF Fe-O-Mn and FM Mn-O-Mn interactions in the well-ordered square lattice, with two Fe ions in the BiFeO3 layer and two Mn ions in the BiMnO3 layer at the interfaces, which is expected to facilitate the study of the Fe-O-Mn interaction. In this paper, BFMO films and BFO/BMO superlattices (simply denoted as BFO/BMO) were grown on (001) STO substrates. Spin glass behavior was observed in both samples. The Fe-O-Mn interaction has been confirmed to be AF by X-ray magnetic circular dichroism (XMCD) measurements. The spin glass in BFMO can be categorized as a cluster spin glass, while the spin glass in BFO/BMO results from competing AF and FM interactions at the interfaces. Figure 1(a) shows the X-ray diffraction (XRD) patterns of BFMO and BFO/BMO with LaNiO3 (LNO) as buffer layer, in Bragg-Brentano geometry using a D/teX Ultra detector (1D detector). Only (001) and (002) diffraction peaks can be observed, indicating the high degree of (001) orientation, which is due to the good match of the lattice constants of BFO (3.96 Å), BMO (3.95 Å) and LNO (3.838 Å) to STO (3.905 Å) 12,17-19. The out-of-plane lattice constants are calculated to be 3.96 Å for BFMO and 3.93 Å for BFO/BMO, respectively. The epitaxial growth of BFO/BMO was further confirmed by high resolution transmission electron microscopy (HRTEM), as shown in Fig. 1(b). However, due to the identical crystal structure and similar atomic numbers of Fe and Mn in the epitaxial layers of BFO and BMO, the interface in the superlattice structure cannot be clearly resolved in the HRTEM images. Considering the pseudo-cubic lattice constant of BFMO of 3.93 Å 13, the slightly larger c lattice constant of BFMO is due to in-plane compression from the STO substrate, and strain relaxation possibly occurred in BFO/BMO. The strain due to the lattice mismatch could introduce the contrast variation at the interface between BFO/BMO and STO shown in Fig. 1(b). According to the phase diagram, BFMO displays a predominantly orthorhombic structure 20. The θ-2θ XRD patterns of BFMO and BFO/BMO were carefully measured in parallel beam geometry using a scintillation detector (inset of Fig. 1(a)). Due to the close atomic scattering factors of Fe³⁺ and Mn³⁺, the (001) superlattice diffraction peak was hardly resolved. The (002) superlattice peak of BFO/BMO, marked by an arrow, can be clearly seen in the inset; it is absent in the XRD pattern of BFMO.
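The out-of-plane lattice constants quoted above follow from Bragg's law applied to the (00l) reflections; a minimal sketch (ours, with an illustrative peak position rather than the measured one) is given below.

```python
# Out-of-plane lattice constant c from a (00l) reflection via Bragg's law:
# lambda = 2 d sin(theta), with d = c / l. Cu K-alpha wavelength assumed.
import math

def c_from_00l(two_theta_deg, l=2, wavelength_A=1.5406):
    theta = math.radians(two_theta_deg / 2.0)
    d = wavelength_A / (2.0 * math.sin(theta))   # (00l) plane spacing, Angstrom
    return l * d

print(round(c_from_00l(45.8), 2))  # hypothetical (002) position -> ~3.96 Angstrom
```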
The scintillation detector in parallel beam geometry is much less sensitive than the D/teX Ultra detector in Bragg-Brentano geometry, so the latter should reveal any impurity phases. Thus, this superlattice peak cannot be due to impurity phases, since it is absent in the Bragg-Brentano scan shown in the main frame of Fig. 1(a). On the other hand, compared with Bragg-Brentano geometry, the parallel beam geometry is more sensitive to the reflection from surfaces and interfaces of films. As shown in the inset of Fig. 1(a), the clear observation of the superlattice peak confirms the high concentration of BFO/BMO interfaces. The period of the superlattice was calculated using the Bragg equation to be about 0.91 nm, which is slightly larger than the designed period (0.79 nm). This is due to a limitation of our PLD system: layer by layer growth with a thickness of exactly 1 unit cell per layer cannot be strictly fulfilled in our work. However, the alternating growth of BFO and BMO layers with a thickness of roughly 1 pseudo-cubic unit cell provides a high concentration of BFO/BMO interfaces and a high interface/bulk ratio, which should facilitate the characterization of the Fe-O-Mn superexchange interaction. A possible checkerboard superstructure in BFO/BMO, corresponding to a (110)-oriented superlattice, has been theoretically predicted 21. However, due to the (001) growth direction of our films with alternating (001) BFO and BMO layers, the checkerboard superstructure seems unlikely to form. Figure 2 shows the X-ray photoelectron spectroscopy (XPS) core level spectra of Fe 2p and Mn 2p in BFMO and BFO/BMO, calibrated by the binding energy of the C 1s line (284.8 eV) 22. The binding energy of Fe 2p3/2 is 710.3 eV for BFMO and 710.1 eV for BFO/BMO. The valence states of Fe in both samples are almost the same after considering the XPS accuracy of ±0.1 eV. However, the decomposition of the Fe 2p3/2 spectrum into a superposition of symmetric components is questionable; thus it is complicated to obtain the exact concentrations of Fe2+ and Fe3+ 23. A satellite peak can be observed at 8.7 eV for BFO/BMO and 8.6 eV for BFMO above the corresponding principal peak. Due to their different d orbital electron configurations, Fe2+ and Fe3+ show satellite peaks at about 6 eV and 8 eV above their 2p3/2 principal peaks, respectively 14. The Fe 2p core level spectra of BFMO and BFO/BMO are similar to those of previously reported BFO, confirming that Fe is mainly in the +3 valence state 14. The binding energy of Mn 2p3/2 is 641.8 eV for BFMO and 641.6 eV for BFO/BMO, respectively. A shoulder peak, marked by an arrow, can be observed below this energy in both samples, which originates from a small concentration of Mn2+ 14. Results BFO is a well known multiferroic material with ferroelectricity above room temperature. However, leakage current is a big obstacle for the observation of ferroelectric hysteresis loops for both samples. The ferroelectric nature of BFMO and BFO/BMO was therefore characterized at room temperature using piezoresponse force microscopy (PFM), as shown in Fig. 3. The clear local PFM hysteresis loops for both samples suggest their ferroelectricity. It was recently pointed out that similar PFM hysteresis loops were observed in soda-lime glass due to dipoles induced by ionic motion under an external electric field 24. We measured the phase hysteresis loops with various maximum voltages, as shown in Fig. 3(a) and (e), and the coercivity of both samples shows little variation.
Furthermore, we measured the amplitude hysteresis loops at different frequencies, as shown in Fig. 3(b) and (f). The observed amplitude hysteresis loops for both samples are insensitive to the measurement time periods. Thus, the ferroelectricity in both BFMO and BFO/BMO can be inferred from the PFM results 24. We further studied the retention of domains written by the PFM tip, as shown in Fig. 3(c), (d), (g) and (h). After 10 hours, negligible changes were observed in the domain patterns for both samples. Figure 4(a) shows the field dependent magnetization (M-H) curves of BFMO measured at different temperatures. As can be seen, a clear soft FM hysteresis loop can be observed at 300 K, indicating room temperature ferromagnetism. A similar phenomenon has been reported by Choi et al., who explained it by the larger strain in ultrathin films 12. However, others reported only negligible weak ferromagnetism 14. It should be noted that the magnetic properties of a pure (001) STO substrate have been checked; the observed weak ferromagnetism is much smaller than the magnetization values of both samples and can be neglected. The observed magnetization is smaller than that reported by Choi et al. 12, but much larger than that reported by Bi et al. 14. With decreasing temperature, not only does the magnetization increase, but the coercivity also increases drastically, especially below 200 K 25. The M-H curves measured at different temperatures for BFO/BMO show similar behavior. The M-H curves are a superposition of paramagnetism and weak ferromagnetism. Under a maximum applied magnetic field of 40 kOe, the total magnetization of BFO/BMO is almost the same as that of BFMO, but the FM magnetization of BFO/BMO is much smaller than that of BFMO. Zero field cooled (ZFC) and field cooled (FC) temperature dependent magnetization (M-T) curves were measured under 100 Oe from 5 K to 300 K, with a cooling field of 10 kOe for the FC curves, as shown in Fig. 4(b) (BFMO) and (d) (BFO/BMO). A broad peak at around 243 K can be observed in the ZFC M-T curve of BFMO, but only a kink is observed in the FC M-T curve (inset of Fig. 4(b)). With increasing Mn doping concentration, T_N of BFO continuously decreases 26; T_N of BFMO ceramics decreases to 440 K 27. Due to the positive formation enthalpy of an ordered structure of Mn and Fe, the distribution of Mn and Fe is inhomogeneous 14. As a result, Fe-rich and Mn-rich clusters will form. Three exchange interactions exist in the system, namely Fe-O-Fe, Mn-O-Mn and Fe-O-Mn, with different ordering temperatures. The T_N of 440 K has been attributed to Fe-O-Fe ordering, which is above the measuring limit of our system. T_N of BFMO prepared under high pressure with an ordered Mn and Fe structure is 270 K 25, and Du reported a second ordering temperature at 260 K 27. Thus, we attribute the observed peak at 243 K in the M-T curves to the onset of the Fe-O-Mn interaction. It is interesting to note that for both samples the FC magnetization deviates from the ZFC magnetization at the freezing point (the temperature at which the ZFC peak occurs). This is a characteristic, though not exclusive, feature of spin glass systems 28,29. In both superparamagnets and spin glasses, a finite dipolar interaction between the spins results in the deviation of the FC and ZFC curves at temperatures lower than the blocking or freezing temperature, and the FC magnetization increases continuously as the temperature is lowered 28. One of the important characteristic features of a spin glass is the phenomenon of aging.
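Reading the freezing temperature and the FC-ZFC irreversibility onset off measured M-T curves amounts to straightforward array processing. The sketch below assumes plain temperature and magnetization arrays; the function names and the deviation tolerance are our own choices, not taken from the paper.

```python
import numpy as np

def freezing_temperature(T, M_zfc):
    """Freezing point as the position of the broad maximum of the ZFC curve,
    with a parabolic refinement suited to a broad peak (uniform T grid)."""
    i = int(np.argmax(M_zfc))
    if 0 < i < len(T) - 1:
        h = (T[i + 1] - T[i - 1]) / 2.0
        denom = M_zfc[i - 1] - 2 * M_zfc[i] + M_zfc[i + 1]
        if denom != 0:
            return T[i] + 0.5 * h * (M_zfc[i - 1] - M_zfc[i + 1]) / denom
    return T[i]

def irreversibility_onset(T, M_zfc, M_fc, tol=0.02):
    """Highest temperature below which FC and ZFC deviate by more than tol
    (relative deviation), i.e., the onset of irreversibility."""
    rel = np.abs(M_fc - M_zfc) / np.maximum(np.abs(M_fc), 1e-12)
    idx = np.where(rel > tol)[0]
    return T[idx.max()] if idx.size else None
```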
To confirm the spin glass behavior in BFMO and BFO/BMO, the time dependence of the thermo-remnant magnetization (TRM) was measured at various temperatures below 350 K by cooling the sample in a field of 10 kOe from 350 K to the final temperature and then abruptly decreasing the field to 500 Oe to measure the time-dependent magnetization. For the magnetic relaxation in a spin glass, a stretched exponential decay is expected 28,30, M(t) = M_0 + M_r exp[-(t/τ)^n] (1), where the glassy component M_r mainly contributes to the observed relaxation effects. The time constant τ and the exponent n are related to the relaxation rate of the spin glass; 0 < n < 1 indicates a spin glass system 28,31. In equation (1), M_0 is added to account for the non-relaxing magnetization responding to the applied field of 500 Oe 30. Figure 5(a) and (c) show typical relaxation curves measured at 5 K for BFMO and BFO/BMO, respectively. The solid curves are the best fits with equation (1), and the fitting parameters are shown; equation (1) fits the experimental data very well. The parameter n is 0.40 for BFMO and 0.53 for BFO/BMO, which is close to the values found in other spin glass systems 28,31,32. Spin glass behavior is generally due to site disorder and lattice frustration, leading to frustrated interactions 33. Furthermore, if FM clusters are considered as macroscopic spins with competing interactions, spin glass behavior can also be observed, termed a cluster spin glass 34. These spin systems qualitatively exhibit similar and characteristic variations of the magnetization 35.
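Fitting equation (1) to TRM relaxation curves of the kind shown in figure 5 can be sketched in a few lines; the starting values and the synthetic data below are illustrative assumptions, not the measured curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def trm_decay(t, M0, Mr, tau, n):
    """Equation (1): M(t) = M0 + Mr * exp(-(t/tau)**n); 0 < n < 1 signals
    spin-glass-like relaxation and M0 is the non-relaxing component."""
    return M0 + Mr * np.exp(-(t / tau) ** n)

def fit_trm(t, M):
    p0 = [M.min(), M.max() - M.min(), t.mean(), 0.5]  # crude starting values
    popt, pcov = curve_fit(trm_decay, t, M, p0=p0, maxfev=20000)
    return popt, np.sqrt(np.diag(pcov))

# usage with synthetic data mimicking the reported exponents n ~ 0.4-0.5
t = np.linspace(1.0, 7200.0, 400)                      # seconds
M = trm_decay(t, 1.0, 0.3, 900.0, 0.45)
M += np.random.default_rng(0).normal(0.0, 1e-3, t.size)
(M0, Mr, tau, n), errs = fit_trm(t, M)
print(f"n = {n:.2f}, tau = {tau:.0f} s")
```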
Thus, XMCD measurements were performed to further clarify the magnetic ordering at low temperatures. Figure 6 shows the X-ray absorption spectroscopy (XAS) and XMCD spectra recorded at the Fe and Mn L2,3 edges for BFMO and BFO/BMO, measured at 4.2 K under a magnetic field of 10 kOe. The line shapes of the Fe and Mn XAS spectra are similar for BFMO and BFO/BMO. In contrast to this similarity of the XAS spectra, the XMCD spectra are obviously different. As can be seen in Fig. 6, the Fe XMCD spectrum of BFMO is very close to that of γ-Fe2O3, i.e., it exhibits two opposite peaks associated with the Oh and Td sites, respectively 36. The influence of doping on the structural distortion of the FeO6 octahedra, with antiparallel alignment of spins, leads to opposite signs of the XMCD signal at different photon energies. We therefore tentatively correlate the Fe XMCD spectrum with site disorder of the Fe ions, since a similar Fe L edge XMCD spectrum has been observed in a BFO film with a high density of domain walls 39. As shown in Fig. 6(b), a strong peak can be observed in the Mn XMCD spectrum, which is very similar to that of a BMO film grown on a (001) STO substrate 40, suggesting the formation of Mn-rich clusters with FM Mn-O-Mn interactions. In comparison with the weak Fe XMCD signal, the relatively strong Mn signal implies that the enhanced ferromagnetism in BFMO at low temperature mainly stems from Mn, not from Fe. In contrast to BFMO, BFO/BMO exhibits a small peak in the Fe XMCD spectrum, suggesting a weak magnetic contribution from Fe in the BFO layers. Previous reports have shown a 4% XMCD signal in BFO/La0.7Sr0.3MnO3 bilayers, corresponding to a magnetic moment of about 0.6 μB/Fe 15, and a 1% XMCD signal in BFO/La0.5Ca0.5MnO3, corresponding to around 0.1 μB/Fe 16. Accordingly, the observed 1.8% XMCD signal in BFO/BMO can be roughly translated into 0.2 μB/Fe, which is much larger than the canted moment (0.03 μB/Fe) in bulk BFO 15. This suggests that the observed 1.8% XMCD does not originate from spin canting in BFO due to the Dzyaloshinskii-Moriya (DM) interaction, but is more likely attributed to spin canting induced by the exchange coupling between Fe and Mn at the interfaces 15,16. The weakening of antiferromagnetism and the induced weak ferromagnetism in the BFO layer at the interface have also been observed in BFO/CoFe and BFO/CoFeB due to exchange coupling 41,42. For the Mn XMCD spectrum of BFO/BMO, we have found a splitting at the L3 edge, different from the single peak observed in BMO 40. This discrepancy can be explained by the fact that the BMO layer is not strictly one unit cell thick, so that the Mn ions have different neighboring environments. For instance, Mn ions inside the BMO layer have only nearest neighboring Mn ions, while Mn ions at the interfaces have nearest neighboring Fe ions, leading to different distortions of the MnO6 octahedra. These two different locations of the Mn ions possibly lead to the splitting at the L3 edge. The same sign of the split XMCD peaks suggests that the Mn spins tend to align parallel, confirming the FM Mn-O-Mn interaction. Comparing the Fe and Mn XMCD spectra of BFO/BMO, their opposite signs suggest an antiparallel alignment of the corresponding magnetic moments, confirming the AF exchange interaction of Fe-O-Mn at the interfaces. The Mn spins in the BMO layer tend to align parallel to each other due to the FM interaction between Mn ions. Thus the spins of neighboring Fe ions in the BFO layer will be forced to align parallel through the AF Fe-O-Mn interaction at the interfaces. Together with the AF interaction between neighboring Fe ions in the BFO layer, the spin canting might be enhanced, leading to an enhanced weak ferromagnetism. Discussion The XMCD results on BFMO clearly demonstrate the formation of FM Mn-rich clusters in an AF Fe-rich matrix, indicating a cluster spin glass 34, as shown in the schematic diagram in Fig. 5(b). Considering the layered growth structure of BFO/BMO, a square lattice is formed at the interface, with two Fe ions in the BFO layer and two Mn ions in the BMO layer, as shown in the schematic structure in Fig. 5(d). Due to the AF exchange interaction, the spins of two neighboring Fe ions, Fe1 and Fe2 in the BFO layer, align antiparallel. The AF exchange interaction between neighboring Fe and Mn at the interface will force the spins of the Mn ions to align antiparallel to the neighboring Fe ions; thus the spins of neighboring Mn ions would be forced to align antiparallel to each other. However, the FM exchange interaction between neighboring Mn ions forces their spins to align parallel to each other, leading to high frustration. Geometrical frustration is generally observed in triangular lattices without disorder that have an AF exchange interaction between nearest neighboring magnetic ions. Magnetic frustration can, however, also be realized in a well ordered square lattice with finely tuned AF and FM interactions 33. The spin glass in BFO/BMO can thus be understood from the competing AF (Fe-O-Fe in BFO, Fe-O-Mn at the interface) and FM (Mn-O-Mn in BMO) exchange interactions at interfaces with well ordered square lattices, in analogy to the geometrical frustration in triangular lattices 33.
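The frustration argument for the interfacial square lattice can be made concrete with a four-site Ising caricature: with AF Fe-O-Fe, AF Fe-O-Mn and FM Mn-O-Mn bonds, no spin configuration can satisfy all four bonds simultaneously. The coupling magnitudes below are illustrative placeholders, not fitted values.

```python
from itertools import product

# sites 0, 1: Fe in the BFO layer; sites 2, 3: Mn in the BMO layer
bonds = {
    (0, 1): -1.0,  # Fe1-Fe2, AF
    (2, 3): +1.0,  # Mn1-Mn2, FM
    (0, 2): -1.0,  # Fe1-Mn1, AF (interface)
    (1, 3): -1.0,  # Fe2-Mn2, AF (interface)
}

def energy(s):
    # Ising energy E = -sum_ij J_ij s_i s_j
    return -sum(J * s[i] * s[j] for (i, j), J in bonds.items())

configs = list(product([-1, +1], repeat=4))
E_min = min(energy(s) for s in configs)
for s in configs:
    if energy(s) == E_min:
        broken = sum(J * s[i] * s[j] < 0 for (i, j), J in bonds.items())
        print(s, "E =", energy(s), "unsatisfied bonds:", broken)
# every ground state leaves at least one bond unsatisfied -> frustration
```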
In summary, comparative structural and magnetic studies have been performed on multiferroic BFMO and BFO/BMO prepared by PLD on (001) STO substrates. The ferroelectricity at room temperature for both samples has been inferred from the observation of PFM hysteresis loops and the electrical writing of ferroelectric domains. Irreversibility of the FC and ZFC M-T curves has been observed in both samples, with a cusp at around 243 K for BFMO and 190 K for BFO/BMO in the ZFC curves. The decay of the thermo-remnant magnetization with time confirms the spin glass behavior. XMCD measurements confirm the AF interaction of Fe-O-Mn. The spin glass behavior in BFMO has been classified as a cluster spin glass due to Mn-rich FM clusters embedded in an AF matrix. The spin glass behavior in BFO/BMO is due to the competition among the AF Fe-O-Fe interaction in BFO, the AF Fe-O-Mn interaction at the interface, and the FM Mn-O-Mn interaction in BMO in the well ordered square lattices at the interfaces of BFO and BMO. Methods BFMO, BFO and BMO ceramic targets were prepared by a tartaric acid modified sol-gel method 43. BFMO and BFO/BMO films were deposited on (001) STO substrates by a pulsed laser deposition (PLD) system with a KrF excimer laser (248 nm) at a repetition rate of 5 Hz. The laser energy was 300 mJ and the target-to-substrate distance was kept at 5 cm. The substrate temperature T_s was kept at 750 °C with an oxygen pressure P_O2 of 2 Pa. BFO/BMO was prepared by alternately ablating the BFO and BMO targets with 5 pulses for each layer, and the stacking sequence was repeated 50 times. A thickness of about one pseudo-cubic unit cell is deposited by 5 laser pulses, as estimated from the average growth speed of a BFO film. For BFMO, 500 pulses were used. After deposition, both BFMO and BFO/BMO were annealed for 0.5 h at 550 °C and cooled down to room temperature in an oxygen pressure of 1 × 10^5 Pa. The film thickness was determined from cross-sectional scanning electron microscopy (SEM, FEI) images to be 34 nm for BFO/BMO and 25 nm for BFMO. For the magnetization measurements, the films were deposited directly on the STO surface, while for the other measurements an LNO buffer layer with a thickness of about 30 nm was deposited first at T_s = 880 °C and P_O2 = 40 Pa. The crystal structure of the films was examined by XRD with Cu Kα radiation (Rigaku Smartlab3). The valence states of Fe and Mn were characterized by XPS (ThermoFisher SCIENTIFIC) with an Al Kα X-ray source (hν = 1486.6 eV). The cross-sectional specimen for HRTEM was prepared by mechanical polishing followed by argon ion milling. The thinned sample was examined using a JEM-200CX. The surface morphology and the ferroelectric domains were characterized by scanning probe microscopy (SPM, Asylum Research Cypher). Temperature dependent magnetic properties were carefully measured with a commercial SQUID-VSM (Quantum Design) from 5 K to 300 K. XAS measurements were performed at the beamline UE46/PGM-1 at BESSY II (Helmholtz-Zentrum Berlin) with a degree of circular polarization of around 90%. The spectra were acquired in total electron yield (TEY) mode by recording the sample drain current as a function of photon energy and were normalized to the incident beam. Right-handed (μ+) and left-handed (μ-) circularly polarized XAS spectra were obtained by reversing the photon helicity under H = 10 kOe. The field is parallel to the beam, and the beam is perpendicular to the surface plane of our samples. The XMCD spectrum was obtained as (μ+ - μ-) and normalized to the maximum peak intensity of the XAS [(μ+ + μ-)/2].
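The XMCD bookkeeping described in the Methods (helicity difference normalized to the XAS maximum) and the rough percent-to-moment conversion used above amount to a few lines; the array names and the averaging of the two literature calibrations are our assumptions.

```python
import numpy as np

def xmcd_percent(mu_plus, mu_minus):
    """XMCD as the helicity difference (mu+ - mu-), normalized to the maximum
    of the average XAS [(mu+ + mu-)/2] and expressed in percent. Inputs are
    TEY spectra already normalized to the incident beam."""
    xas_avg = 0.5 * (mu_plus + mu_minus)
    return (mu_plus - mu_minus) / np.max(xas_avg) * 100.0

def moment_from_percent(p):
    """Order-of-magnitude moment estimate by linear scaling against the cited
    calibrations (4% <-> 0.6 mu_B/Fe and 1% <-> 0.1 mu_B/Fe)."""
    scale = np.mean([0.6 / 4.0, 0.1 / 1.0])  # mu_B per percent XMCD
    return p * scale

print(round(moment_from_percent(1.8), 2))  # ~0.2 mu_B/Fe, as quoted
```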
Secondary metabolites from the Aspergillus sp. in the rhizosphere soil of Phoenix dactylifera (palm tree) The soil-derived fungus Aspergillus sp., isolated from the rhizospheric soil of Phoenix dactylifera (date palm tree) and cultured on large scale on solid rice medium, yielded a novel compound, 1-(4-hydroxy-2,6-dimethoxy-3,5-dimethylphenyl)-2-methyl-1-butanone (1), and four known compounds: citricin (2), dihydrocitrinone (3), 2,3,4-trimethyl-5,7-dihydroxy-2,3-dihydrobenzofuran (4) and oricinol (5). The structures of the isolated compounds were elucidated by MS, 1H, 13C and 2D NMR spectra. Compound (1) exhibited potent antimicrobial activity against Staphylococcus aureus with an MIC value of 2.3 μg mL−1 and significant growth inhibitions of 82.3 ± 3.3% against Candida albicans and 79.2 ± 2.6% against Candida parapsilosis. This is the first report on the isolation of metabolites from an Aspergillus fungus found in temperate-region date palm rhizospheres. Electronic supplementary material The online version of this article (10.1186/s13065-019-0624-5) contains supplementary material, which is available to authorized users. Introduction The rhizosphere is the portion of the soil surrounding the plant root [1,2]. This soil harbors a greater microbial diversity than non-rhizosphere soil [3]. The microorganisms in the rhizosphere play a substantial biological role in the growth of the host plant, either through the defense mechanisms that the rhizosphere microbial communities provide against pathogens, or by providing nutrition to the plant through their role in the mineralization of different organic compounds [4,5]. Fungi, for instance, provide the plant with phosphorus, while asymbiotic and symbiotic bacteria play an important role in nitrogen fixation and directly increase the available nitrogen in the rhizosphere region [6]. However, the diversity of microbial strains varies from one rhizosphere to another, according to the species of the plant and the environmental factors [7,8]. Recent reports show that the rhizosphere region of soils is an untapped source of clinically important microorganisms, especially fungi [9-14], which produce a large number of bioactive metabolites. However, the attention given to the isolation of novel compounds of great pharmaceutical value from this fungal habitat is still limited compared with endophytic and marine niches. Phoenix dactylifera, commonly known as the date palm tree, is globally valued for its health- and nutrition-promoting fruit [15]. This tree grows in arid and semi-arid regions; areas with long, dry summers and mild winters are best for date palm cultivation [16]. The Kingdom of Saudi Arabia is the second largest producer and exporter of dates, and this tree covers more than 170 thousand hectares there [17]. The filamentous fungi of the genus Aspergillus are ubiquitous opportunistic moulds that are pathologically and therapeutically important [18]. Many studies have reported numerous bioactive metabolites isolated from Aspergillus sp. [19-21]. These metabolites showed significant therapeutic importance, such as anticancer and antimicrobial activities. The biological value of this fungal genus makes it of considerable interest to the scientific research community for the discovery of further novel bioactive compounds [22]. As part of our ongoing search for bioactive fungal secondary metabolites from unexplored niches [23,24], in this study a fungal strain, RO-17-3-2-4-1, identified as Aspergillus sp., was isolated from the rhizosphere soil of P.
dactylifera, Wadi Hanifa, 15 km northwest of Riyadh, Saudi Arabia. To the best of our knowledge, this is the first research report on the isolation of secondary metabolites from the rhizosphere soil of the temperate-region plant P. dactylifera. Isolation and structural identification Disease-suppressive soils offer effective protection to plants against infection by soil-borne pathogens. Therefore, suppressive soils are considered a rich source for the discovery of microorganisms that provide novel secondary metabolites in large scale culture. To date, a plethora of work has been done on the culture of fungi obtained from these soils, which has led to the isolation of novel biologically active constituents. In our ongoing research on soil-based microorganisms and their culture for the identification of secondary metabolites, we worked on the crude ethyl acetate extract of the inter-rhizospheric fungus (Aspergillus sp.). It exhibited considerable antimicrobial activity against the tested bacterial and fungal strains. Bioactivity-guided fractionation led to the isolation of one new compound, 1-(4-hydroxy-2,6-dimethoxy-3,5-dimethylphenyl)-2-methyl-1-butanone (1), together with four known compounds: citricin (2), dihydrocitrinone (3), 2,3,4-trimethyl-5,7-dihydroxy-2,3-dihydrobenzofuran (4), and oricinol (5) (Fig. 1). Herein, we report the structure elucidation and biological evaluation of the isolated compounds; the structures of the known compounds were confirmed by comparison of the NMR data with literature values. Biological activities All isolated compounds (1-5) were evaluated for their antimicrobial activity against pathogenic bacteria and fungi by the disc diffusion method, measuring the inhibition zones, and for the active compounds the minimum inhibitory concentration (MIC) values were also determined. Interesting antimicrobial properties were observed. For the human pathogenic fungi, the simple aromatic compound 5 showed the most significant growth inhibitions of 92 ± 3.9% and 90 ± 2.8% at 50 μg mL−1 against Candida albicans and Candida parapsilosis, respectively, followed by compounds 1, 2 and 4, with inhibition values higher than the positive control itraconazole, a broad-spectrum antifungal drug. Compound 3 showed neither antifungal nor antibacterial activity at 25 μg mL−1. These results suggest that the aromatic ring in polyketides may strengthen the antibacterial and antifungal activities of this class of compounds. General experimental procedures The experimental procedures are given in Additional file 1. Plant and fungal strain materials The fungal strain was isolated from rhizosphere soil of P. dactylifera, Wadi Hanifa, 15 km northwest of Riyadh, KSA, in October 2017 and deposited in the laboratory of the Pharmacognosy Department, KSU. The fungus was identified as Aspergillus sp. (GenBank accession No. MK028999) by DNA amplification and sequencing of the fungal ITS region, as reported in the literature [28,29]. Antibacterial assay The antibacterial activity was determined according to the reported method [20]. The Gram-positive Staphylococcus aureus (CP011526.1) and Bacillus licheniformis (KX785171.1) and the Gram-negative Enterobacter xiangfangensis (CP017183.1), Escherichia fergusonii (CU928158.2) and Pseudomonas aeruginosa (NR-117678.1) bacteria were suspended in a nutrient broth for 24 h and then spread on Mueller-Hinton agar plates. 10 µL of the sample solution were loaded into wells, with amikacin as positive control.
The clear area free of microbial growth was measured in triplicate to determine the diameter of the zone of inhibition, and the means were recorded. The minimal inhibitory concentration (MIC, μg mL−1), i.e., the lowest concentration of a tested compound that inhibits visible bacterial growth, was determined as well [28]. Antifungal assay The antifungal activity of the isolated compounds was assessed using well diffusion and broth microdilution techniques, with itraconazole as positive control. The tested pathogenic fungi were Candida albicans and C. parapsilosis. Following Gong and Guo [29], 100 µL of fungal suspension (approximately 3 × 10^6 colony-forming units (CFU) mL−1) were smeared on SDA plates. Wells were created in the SDA plates and loaded with 10 µg of the tested compounds. The plates were then incubated at 37 °C for 1 day. The diameters (in mm) of the zones of inhibition were measured and the growth inhibition rates, expressed as means ± SD, were obtained according to the following formula: growth inhibition (%) = (d_c − d_s)/(d_c − d_0) × 100, where d_c is the diameter of the untreated control fungus, d_s the diameter of the sample-treated fungus and d_0 the diameter of the fungus cut. Conclusions Polyketides possess a wide range of significant biological activities, such as anti-tumor, antimicrobial and anti-inflammatory effects. In our study, one new and four known metabolites were obtained from the large scale fermentation of the inter-rhizospheric fungus Aspergillus sp., and their antimicrobial activity was evaluated. The isolation of compounds 1-5 suggests that this Aspergillus strain is a powerful producer of polyketides with diverse structures. Compound 1 showed significant antimicrobial activity against the two pathogenic fungal strains Candida albicans and C. parapsilosis and against a pathogenic strain of the bacterium Staphylococcus aureus with an MIC of 2.3 μg mL−1. This study shows the importance of fungi inhabiting rhizospheric soil as an untapped source of novel secondary metabolites. Additional file Additional file 1: NMR and mass spectra and chromatograms of the extracts.
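The growth inhibition formula quoted above reduces to simple diameter arithmetic. The sketch below shows the computation with illustrative triplicate values, not the paper's raw data.

```python
import numpy as np

def growth_inhibition(d_c, d_s, d_0):
    """Growth inhibition rate (%) from colony diameters:
    (d_c - d_s) / (d_c - d_0) * 100, with d_c the untreated control,
    d_s the sample-treated fungus and d_0 the fungus cut."""
    return (d_c - d_s) / (d_c - d_0) * 100.0

d_c = np.array([48.0, 50.0, 49.0])   # control diameters (mm), triplicate
d_s = np.array([12.0, 13.5, 12.5])   # treated diameters (mm), triplicate
d_0 = 5.0                            # diameter of the fungus cut (mm)
rates = growth_inhibition(d_c, d_s, d_0)
print(f"{rates.mean():.1f} +/- {rates.std(ddof=1):.1f} %")
```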
Exact solutions for chemical concentration waves of self-propelling camphor particles racing on a ring: A novel potential dynamics perspective A potential dynamics approach is developed to determine the periodic standing and traveling wave patterns associated with self-propelling camphor objects floating on ring-shaped water channels. Exact solutions for the wave patterns are derived. The bifurcation diagram describing the transition between the immobile and self-propelling modes of camphor objects is derived semi-analytically. The bifurcation is of pitchfork type, which is consistent with earlier theoretical work in which natural boundary conditions were considered. Introduction A challenge in modern-day research in biophysics and bioengineering is to create and understand biochemical particle systems that mimic biological cell motion. In particular, the issue is to construct particle-surface systems in which particles have the capability to move themselves over certain distances by converting chemical energy into kinetic energy [1]. As pointed out in reference [1], a fundamental class of such man-made self-moving systems is given by objects that float on a medium, are driven by differences in the surface tension of that medium, and at the same time produce gradients of some substance that affects the surface tension of the medium. Irrespective of engineering applications, self-propelling chemical systems allow us to study principles that might be important for our understanding of the physics of life in general, and of so-called active Brownian particles [2] in particular. In fact, it has been shown that under certain conditions self-propelling chemical "motors" exhibit negative friction terms [1,3] that are a hallmark of active Brownian systems [2,4-8]. A variety of self-propelling chemical motors have been studied [3,9-14]. In particular, the self-motion of camphor particles moving on water has been studied extensively, in particular by Nakata and colleagues [15-20]. In this context, we would like to point out that not only single camphor particles have been considered: the interaction between two self-propelling camphor particles has been examined [21], the collective motion of a small number (about 10) of camphor particles has been investigated [22,23], and the spatial and velocity distributions of many interacting camphor particles have been determined [24,25]. A theoretical model for the self-motion of a single, solid camphor disc floating on water has been developed in terms of a Newtonian equation for the disc and a reaction-diffusion equation for the concentration of the surface-tension active camphor molecules on the water surface [26]. In this context, analytical solutions of the concentration wave patterns under natural boundary conditions have been derived and it has been shown that the self-propelling mode bifurcates from an immobile mode by means of a pitchfork bifurcation. However, an experimentally very useful paradigm is the self-motion of a camphor disc on a ring channel [15-20]. In view of the importance of ring-shaped designs for experimental research, the present study goes beyond the case of natural boundary conditions. To the best of our knowledge, we will for the first time present a theoretical analysis of the periodic case that can directly be applied to and compared with the laboratory situation.
In addition, while it is plausible to assume that the chemical wave patterns of camphor particles on a ring exhibit a single peak, a clear explanation of why this should be the case has not been given so far. Such an explanation will be given below. To this end, a potential dynamics perspective for the coupled particle-wave system will be developed. Explicitly, the aim of the current study is threefold. First, we will work out a potential dynamics approach to wave patterns of reaction-diffusion systems in order to qualitatively discuss the possible shapes of the camphor concentration patterns associated with the self-motion of camphor discs floating on a ring channel. Second, we will derive analytical solutions for the standing and traveling concentration waves associated with immobile and self-propelling camphor discs, respectively, on a ring channel. Third, we will numerically determine the bifurcation diagram for the case of periodic boundary conditions. In doing so, we will show that the pitchfork bifurcation derived earlier for natural boundary conditions can also be found in the case of the periodic boundary conditions that are frequently used in experimental research. We consider a camphor disc traveling on a ring channel filled with water. Note that the disc releases camphor to the water surface, which implies that in this system we need to distinguish between the camphor disc and the camphor concentration on the water surface. The width of the ring channel is of the order of the disc diameter, such that the disc can move only in one direction, along the ring. The ring has radius R and circumference 2πR. The disc position is described by the periodic variable y(t) ∈ [0, 2πR], where t denotes time. Moreover, the velocity of the disc is described by v(t). The concentration u of camphor molecules on the water surface at time t and at a particular position x ∈ [0, 2πR] along the ring is described by the field variable u(x, t) ≥ 0. The dynamics of the camphor disc is given by [26] dy/dt = v and dv/dt = -γa [∂u(x, t)/∂x]_{x=y} - μv (equations (1.1a) and (1.1b)) with a, γ, μ > 0. Accordingly, the disc satisfies a Newtonian equation with a force generated by the camphor concentration field u and a friction force proportional to the camphor disc velocity. From a mechanistic point of view, the camphor concentration affects the surface tension, which in turn acts as a force on the camphor disc (see the introduction above). The effective force given as the first term on the right hand side of equation (1.1b) can be regarded as a gradient force of a potential, where u plays the role of the potential. That is, the camphor disc is driven away from regions of high camphor concentration. The parameters γ and a are related to certain details of the aforementioned mechanistic relationship between camphor concentration, surface tension, and the force acting on the camphor particle, see reference [26]. Finally, in equation (1.1) the parameter μ denotes the friction coefficient. The camphor concentration field u(x, t) satisfies the reaction-diffusion equation ∂u/∂t = ∂²u/∂x² - ku + F (1.2) with k > 0. The term -ku describes the decay of the camphor concentration on the water surface due to dissolution and sublimation. The function F describes the increase of the camphor concentration on the water surface due to the camphor disc depositing camphor molecules on the water surface. Mathematically speaking, F corresponds to a source term. In this context, note that we consider not too long time periods, during which the mass loss of the camphor disc can be neglected.
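A minimal numerical sketch of the coupled model (1.1)-(1.2) on a periodic ring is given below, using an explicit finite-difference scheme and the top-hat source term F specified in the next paragraph; the values of a, γ and μ are placeholders, while k, R and r follow the standing-wave example discussed later.

```python
import numpy as np

k, R, r = 0.1, 5.0, 0.2            # decay constant, ring radius, disc radius
a, gamma, mu = 1.0, 1.0, 0.05      # placeholder force/friction parameters
L = 2 * np.pi * R
N = 512
dx = L / N
x = np.arange(N) * dx
dt = 0.2 * dx**2                   # explicit-diffusion stability (D = 1)

u = np.zeros(N)                    # camphor concentration on the ring
y, v = 0.0, 0.05                   # disc position and a small initial kick

def ring_dist(x, y):
    d = np.abs(x - y)
    return np.minimum(d, L - d)    # shortest distance on the periodic ring

for step in range(100000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    F = (ring_dist(x, y) <= r).astype(float)      # top-hat source term
    u += dt * (lap - k * u + F)                   # eq. (1.2)
    i = int(round(y / dx)) % N                    # grid index of the disc
    dudx = (u[(i + 1) % N] - u[(i - 1) % N]) / (2 * dx)
    v += dt * (-gamma * a * dudx - mu * v)        # eq. (1.1b)
    y = (y + dt * v) % L                          # eq. (1.1a)

print("terminal disc velocity:", v)
```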
The source term F is defined by F = 1 for |Δ| ≤ r with Δ = x - y, and F = 0 otherwise, where r > 0 is the radius of the camphor disc. Note that, from a mathematical point of view, for large r we may not think of the disc as a circular object. We may imagine a solid object that has the shape of a segment of the ring with segment length 2r measured along the curvilinear coordinate x. Then we have r ≤ πR. In the case r = πR, the object would be a full, solid ring that covers the whole surface of the ring channel. It is important to note that equations (1.1) and (1.2) are coupled via the disc position y and the source term F. In what follows, we consider wave solutions u(x, t) = g(z) with the phase variable z = x - ct (equation (2.1)) that travel with a constant velocity c, such that the disc position evolves as y(t) = y_0 + ct. This also leads to v(t) = c. The camphor disc moves with the same velocity as the wave pattern. Substituting equation (2.1) into equation (1.2), we obtain the traveling wave equation d²g/dz² + c dg/dz - kg + F = 0. The position y_0 of the camphor disc is arbitrary. That is, if there is a wave pattern g(z, y_0) for a particular camphor disc position y_0, then the position y_0 + h has the solution g(z, y_0 + h) = g(z - h, y_0), which shifts the wave pattern by h. Without loss of generality, we put y_0 = 0, such that g(z) = g(z, y_0 = 0). The wave pattern is subjected to periodic boundary conditions g(z) = g(z + 2πR), which implies that the relations g(0) = g(2πR) and dg(0)/dz = dg(2πR)/dz hold. The traveling wave equations (2.4) and (2.5) contain the standing wave equations as a special case. For the standing wave we put c = 0, such that d²g/dz² = kg - F. These equations have a symmetric solution g(z) = g(-z). Equations (2.4)-(2.7) can be solved using the potential dynamics method for solving reaction-diffusion equations (see e.g. reference [28, Chap. 9]). Accordingly, g is regarded as the position of a hypothetical point particle that evolves in time t and moves with a particle velocity v_g. In doing so, the phase coordinate z is replaced by the time variable t. The particle motion is subjected to a potential force with a potential V(g, t) that depends on time. Using these replacements [i.e., z → t and dg/dz → v_g(t)], the traveling wave equation becomes dg/dt = v_g, dv_g/dt = -c v_g - dV/dg (2.8), and we are looking for periodic solutions g(t) with period T = 2πR. During the time intervals [0, r] and [T - r, T] we have F = 1 and the potential V gives rise to the "force" -dV/dg = kg - 1. By contrast, for t ∈ (r, T - r) we have F = 0, which implies -dV/dg = kg. In total, the potential is given by V(g, t) = F(t) g - (k/2) g² (2.9). The two different potential forms described by equation (2.9) are illustrated in figure 1 and will be referred to as type I and type II potentials, respectively. Both potentials are inverted parabolic potentials. For F = 1 the potential has a peak at g = 1/k (top panel); otherwise (F = 0) the potential has a peak at g = 0 (bottom panel). Equations (2.8) and (2.9) have to be solved under the initial conditions g(t = 0) and v_g(t = 0), see equation (2.5). Moreover, the periodicity condition g(t + T) = g(t) implies that the boundary conditions g(0) = g(T) and v_g(0) = v_g(T) hold. Finally, we have the constraint g(t) ≥ 0, because g in the original context reflects the concentration of camphor molecules. The potential dynamics subjected to these initial and boundary conditions is then determined by the two types of repulsive potentials shown in figure 1. Let us discuss periodic solutions g(t) corresponding to standing wave patterns g(z) associated with an immobile camphor disc. For c = 0, equation (2.8) describes a Newtonian equation without damping that is solved under the initial conditions v_g = 0 and g(0) = A > 0. If A > 1/k, then we have g → ∞ for t → ∞, because both potentials decay monotonically for g > 1/k. For g(0) = 1/k, a periodic solution is not possible for 0 < r < πR.
However, in the limiting case r = πR, the dynamics of the hypothetical particle is subjected only to the type I potential force, with the potential maximum located at g = 1/k (top panel of figure 1), such that g(0) = g(t) = 1/k is a solution of equation (2.8). That is, the wave pattern g(z) is a constant when the water channel is completely filled with a solid, ring-shaped camphor object. In summary, for 0 < r < πR we have g(0) = A ∈ (0, 1/k). Due to the type I potential force, for t ∈ [0, r] the particle moves "downhill" with respect to the type I potential V, that is, to the left towards g = 0, and reaches at time t = r a point g(r) = B. At t = r, the potential switches from type I to type II. Since we have v_g(r) < 0, the particle continues to move towards g = 0. However, at this stage it moves "uphill" with respect to the type II potential V and, consequently, is decelerated. At a time point t*, the particle has zero velocity (v_g(t*) = 0). At this instance, the particle has reached its minimal position g(t*) = g_min. Due to the impact of the type II potential force, the particle is then accelerated and starts to move "downhill" with respect to the type II potential V, that is, it moves to the right. Due to the symmetry of the problem at hand, the time point t* is half of the period: t* = T/2 = πR. When the particle moves to the right, g increases and eventually, at t = T - r, the particle reaches the position g = B again. Note that the "uphill" movement B → g_min and the "downhill" movement g_min → B are described by a time-reversible Newtonian equation, which implies that the trajectory g(t) is symmetric with respect to t = T/2. At t = T - r we have v_g > 0. In particular, we have v_g(T - r) = -v_g(r). Moreover, the potential switches from type II to type I. Due to the "initial" velocity v_g(T - r) > 0, the particle goes "uphill" in the type I potential V, slows down, and finally reaches the location g = A with v_g = 0 at time point t = T. In short, the periodic solution follows the sequence g(0) = A → g(r) = B → g(T/2) = g_min → g(T - r) = B → g(T) = A (2.11) and exhibits a single maximum at g(0) and a single minimum at g(T/2). Moreover, the trajectory is symmetric with respect to t = T/2 for t ∈ [0, T] (and symmetric with respect to t = 0 for t ∈ [-T/2, T/2]), which means that the periodic standing wave pattern g(z) has a symmetry axis. Periodic solutions with v_g(0) ≠ 0, related to traveling wave patterns g(z) induced by a self-propelling camphor disc, can be discussed in a similar way. In this context, we note that a necessary condition for a periodic solution is again g(0) < 1/k. In order to see this, let us assume that g(0) > 1/k holds. For c < 0, this implies v_g(0) > 0, which implies g → ∞ for t → ∞, because both potentials I and II decay monotonically for g > 1/k. For c > 0 and g(0) > 1/k, we see that the particle has initial velocity v_g(0) < 0 and moves towards g = 1/k, which is the peak of the type I potential V (figure 1, top panel). If |v_g(0)| is not sufficiently large, it will not reach the point g = 1/k. Rather, we will have v_g(t') = 0 with g(t') > 1/k for t' ∈ [0, r], which implies g → ∞ for t → ∞. That is, a necessary condition for a periodic solution starting at g(0) > 1/k is that |v_g(0)| is sufficiently large, such that the particle passes the point g = 1/k during the interval [0, r]. However, the particle should return somehow to the initial position g(0) > 1/k.
Once the particle has passed the point g = 1/k at t' ∈ [0, r], the only way to return to the subspace g ∈ (1/k, ∞) is to pass the point g = 1/k at a later time point t'' > t' with velocity v_g(t'') > 0. From g(t'') = 1/k and v_g(t'') > 0 it follows that g → ∞ for t → ∞. In summary, periodic solutions with v_g(0) ≠ 0 and g > 0 only exist for g(0) ∈ (0, 1/k). By analogy to the previous discussion for the case c = v_g(0) = 0, for c > 0 (camphor disc and wave pattern traveling to the right on the ring coordinate x) we obtain the sequence g(0) → g(t*) = g_min → g(t**) = g_max → g(T) = g(0) (2.12) with 0 < r < t* < T - r < t** < T and v_g = 0 at t* and t**. As indicated, the maximum is reached at t** ∈ [T - r, T] and does not correspond to the initial position g(0). In any case, there is a single minimum and a single maximum, which means that a traveling wave concentration pattern g(z) is single-peaked and exhibits a single minimum. The condition g(t**) > g(T) = g(0) implies for the original problem that the camphor disc is located to the "right" of the peak of the traveling wave concentration pattern. For c < 0, we conclude in a similar vein that the traveling wave pattern is single-peaked and has a single minimum. As anticipated above, the potential dynamics picture reveals that both the standing and the traveling wave patterns exhibit a single peak only. However, while the standing wave patterns g(z) exhibit a symmetry axis, traveling wave patterns g(z) are non-symmetric. The next objective is to explicitly formulate the scenarios described by the sequences (2.11) and (2.12) in order to obtain analytical solutions for g(t) and for the patterns g(z). Standing waves We next derive the symmetric solutions of equation (2.8) with v_g(0) = 0. In this case, it is sufficient to determine the trajectory g(t) in the interval [0, T/2 = πR] because, as we will see below, we can take advantage of the constraint v_g(T/2) = 0, see equation (2.11). For t ∈ [0, r], the solution g(t) satisfies equation (2.8) with c = 0, v_g(0) = 0, and g(0) ∈ (0, 1/k). The solution reads g(t) = 1/k + ξ cosh(√k t) with ξ = g(0) - 1/k (2.13), which yields g(r) = 1/k + ξ cosh(√k r) and v_g(r) = √k ξ sinh(√k r) (2.14). For t ∈ (r, T/2] the solution g(t) satisfies equation (2.8) for c = 0 and the initial conditions g(r) and v_g(r) given above. The solution reads g(t) = g(r) cosh[√k(t - r)] + (v_g(r)/√k) sinh[√k(t - r)] (2.15). Imposing the symmetry constraint v_g(T/2) = 0 determines the amplitude as ξ = -(1/k) sinh[√k(πR - r)]/w with w = sinh(√k πR) (2.17). Since the right hand side of equation (2.17) is negative and w > 0, we conclude that ξ < 0. In addition, we see that w > sinh[√k(πR - r)] holds. From w > sinh[√k(πR - r)] it follows that the factor sinh[√k(πR - r)]/w > 0 is smaller than 1, such that g(0) = (1/k){1 - sinh[√k(πR - r)]/w} ∈ (0, 1/k) (2.19). That is, g(0) satisfies the necessary condition for periodic solutions. Equations (2.13)-(2.15) and (2.19) were compared with a numerical solution method and showed consistent results. Although the peak of the pattern shown in figure 2 appears to have a kink, from equations (2.13)-(2.15) it follows that the functions g(t) and g(z) are in fact smooth at t = z = 0 (i.e., continuously differentiable). This is illustrated in the insert of figure 2, which depicts the standing wave pattern in a small region around z = 0. Let us briefly address the case r → πR. For r = πR, equation (2.13) yields g(t) = 1/k. Likewise, equation (2.19) reduces to g(0) = 1/k. This is consistent with the conclusion drawn above that for r = πR the ring channel exhibits a homogeneous concentration pattern g = 1/k. [Figure 2: standing wave pattern g(z) computed from equations (2.13)-(2.15) and (2.19); circles correspond to the graph g determined by a numerical solution method (see text); the insert shows a detail of the graph g(z) around z = 0; parameters: k = 0.1, R = 5.0, r = 0.2 (as in reference [26]).]
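The closed-form standing wave (2.13)-(2.15) with the amplitude (2.19), as reconstructed above, can be evaluated and sanity-checked in a few lines; the symmetry condition v_g(T/2) = 0 serves as the consistency test.

```python
import numpy as np

k, R, r = 0.1, 5.0, 0.2
sk = np.sqrt(k)
T = 2 * np.pi * R
w = np.sinh(sk * np.pi * R)
g0 = (1.0 / k) * (1.0 - np.sinh(sk * (np.pi * R - r)) / w)  # eq. (2.19)
xi = g0 - 1.0 / k                                           # eq. (2.17), xi < 0

def g_half(t):
    """Standing wave on [0, T/2]; the full period follows by symmetry."""
    t = np.asarray(t, dtype=float)
    g1 = 1.0 / k + xi * np.cosh(sk * t)                     # eq. (2.13)
    gr = 1.0 / k + xi * np.cosh(sk * r)
    vr = sk * xi * np.sinh(sk * r)                          # eq. (2.14)
    g2 = gr * np.cosh(sk * (t - r)) + vr / sk * np.sinh(sk * (t - r))
    return np.where(t <= r, g1, g2)                         # eq. (2.15)

eps = 1e-6
v_half = (g_half(T / 2) - g_half(T / 2 - eps)) / eps        # should be ~0
print("g(0) =", g0, " g_min =", float(g_half(T / 2)), " v_g(T/2) ~", float(v_half))
```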
Traveling waves Our next objective is to find solutions of equations (2.8)-(2.10) for c ≠ 0, which implies v_g(0) ≠ 0. To this end, we will use equations (2.8) and (2.9) to determine the trajectory g(t) and the particle velocity v_g(t) in the full period [0, T] as functions of the unknown parameters g(0), v_g(0) and c. We will then use the three constraints given by equation (2.10) and by the two periodicity requirements g(0) = g(T) and v_g(0) = v_g(T) to determine g(0), v_g(0) and c. Finally, we compute the bifurcation diagram for the same set of parameters a, γ, k, R, r as used to calculate the wave pattern g(z) in figure 3. However, μ was not fixed. Rather, the parameter μ was considered as a control parameter and was varied in the range μ ∈ [0, 0.2]. The traveling wave velocity c as a function of μ was determined using the two aforementioned methods (MATLAB fsolve and the dynamical system w_1(t), w_2(t)). The propagation velocities c thus obtained are shown in figure 4. The bifurcation diagram indicates that for periodic solutions g(z) there is a pitchfork bifurcation at a critical value μ_c, such that at μ = μ_c the standing wave solution with c = 0 becomes unstable and two stable traveling wave solutions emerge with either c > 0 or c < 0.
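A shooting-type computation of the bifurcation diagram can be sketched as follows. The first two residuals enforce periodicity over one period of equation (2.8); the third encodes a force balance μc = -γa v_g(0) for the disc, which is our reading of the constraint (2.10) and should be treated as an assumption, as should the parameter values and initial guesses.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

k, R, r = 0.1, 5.0, 0.2
a, gamma = 1.0, 1.0
T = 2 * np.pi * R

def rhs(t, s, c):
    g, vg = s
    F = 1.0 if (t <= r or t >= T - r) else 0.0   # top-hat source intervals
    return [vg, -c * vg + k * g - F]             # eq. (2.8)

def residuals(p, mu):
    g0, v0, c = p
    sol = solve_ivp(rhs, [0.0, T], [g0, v0], args=(c,),
                    rtol=1e-10, atol=1e-12, max_step=0.05)
    gT, vT = sol.y[:, -1]
    return [gT - g0, vT - v0, mu * c + gamma * a * v0]  # assumed eq. (2.10)

for mu in np.linspace(0.2, 0.01, 20):
    # bias the initial guess towards the c > 0 branch
    g0, v0, c = fsolve(residuals, [0.62, -0.01, 0.05], args=(mu,))
    print(f"mu = {mu:.3f}  c = {c: .4f}")
```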
Discussion A potential dynamics approach was used to determine the shape of chemical concentration patterns on a ring-shaped water channel associated with a camphor disc floating on the water surface of the channel in either an immobile or a self-propelling fashion. The potential dynamics approach allowed us to qualitatively determine the shape of the wave patterns and to show that the concentration patterns are bounded from above. In rescaled dimensionless units, the maximal camphor concentration of standing or traveling wave patterns cannot exceed the inverse of the decay constant k. With the help of the potential dynamics approach, analytical expressions for standing and traveling wave patterns were derived. Finally, by means of the analytical expression for traveling wave patterns, the bifurcation diagram describing the transition from standing to traveling waves (immobility to self-motion) when the friction parameter μ is decreased was determined numerically. The stable branches of the bifurcation diagram for values of μ smaller than but close to the critical value μ_c (i.e., μ = μ_c - ε with ε > 0 small) may be described by μ_c - μ = L c² + O(c⁴), where L > 0 holds and c corresponds both to the camphor disc velocity and to the traveling wave velocity. Taking the immobile-disc and standing-wave solution c = 0 into account, this relationship between μ and c becomes (μ_c - μ) c = L c³ + O(c⁵) and describes a supercritical pitchfork bifurcation. This relationship was analytically derived in a previous study assuming natural boundary conditions [26]. Consequently, our considerations indicate that a supercritical pitchfork bifurcation can also be observed in the case of periodic boundary conditions. In the aforementioned study it was also shown that under natural boundary conditions, and when the radius of the camphor disc becomes large, the velocity c and the control parameter μ can be related by (μ_c - μ) c = L_1 c³ + L_2 c⁵ + O(c⁷) with L_1 < 0 and L_2 > 0. When the solution c = 0 is ignored, the simplified, truncated relation μ_c - μ = L_1 c² + L_2 c⁴ describes the branches of self-propelling camphor discs associated with traveling wave concentration patterns. It has the shape of a W when considering ε = μ_c - μ as a function of c. Consequently, this second relationship describes a subcritical pitchfork bifurcation when considering c as a function of ε = μ_c - μ. The subcritical bifurcation predicts that for appropriately chosen values of μ there are two different propagation velocities c > 0 (and likewise two velocities c < 0). We were not able to find a counterpart of this observation for the case of periodic boundary conditions. That is, the two numerical methods described in section 2.3 to determine the bifurcation diagram did not indicate the existence of a W-shaped relationship between μ_c - μ and c. In this context, it is important to note that in reference [26] it was shown that the range of values μ for which two different positive (or negative) velocities c exist is relatively small. Therefore, the issue of a subcritical pitchfork bifurcation for solutions of the reaction-diffusion equation (1.2) subjected to periodic boundary conditions may be addressed by more sophisticated analytical or numerical methods that go beyond the scope of the present study.
Analytical finite element matrix elements and global matrix assembly for hierarchical 3-D vector basis functions within the hybrid finite element boundary integral method. A hybrid higher-order finite element boundary integral (FE-BI) technique is discussed, where the higher-order FE matrix elements are computed by a fully analytical procedure and where the global matrix assembly is organized by a self-identifying procedure of the local-to-global transformation. This assembly procedure applies to both the FE part and the BI part of the algorithm. The geometry is meshed into three-dimensional tetrahedra as finite elements and nearly orthogonal hierarchical basis functions are employed. The boundary conditions are implemented in a strong sense, such that the boundary values of the volume basis functions are directly utilized within the BI, either for the tangential electric and magnetic fields or for the associated equivalent surface current densities obtained by applying a cross product with the unit surface normals. The self-identified method for the global matrix assembly automatically discerns the global order of the basis functions for generating the matrix elements. Higher order basis functions need more unknowns for each single FE; however, fewer FEs are needed to achieve the same satisfactory accuracy. This improvement provides a lot of flexibility. Introduction The Finite Element Boundary Integral (FE-BI) method (Jin, 2002; Tzoulis and Eibert, 2005; Eibert and Hansen, 1997) is an efficient numerical technique for solving electromagnetic field problems. Traditional finite element methods rely on utilizing the local information of the FEs. The fixed local node order forces the local matrix elements to be transformed into global ones. For low order (LO) basis functions, the local-global transformation is easy, as edge related elements only follow the edge directions. When it comes to higher order (HO) basis functions (Jin, 2002; Sun et al., 2001; Ismatullah and Eibert, 2009; Jorgensen et al., 2004; Graglia et al., 1997; Jorgensen et al., 2005), the basis functions are also related to faces, volumes or even more complicated structures. The local-global transformation procedure then introduces considerably more difficulties. In this paper, a self-identified hierarchical basis function method is illustrated. This method effectively overcomes the problem mentioned above and provides more flexibility within FE-BI. Without fixing the node order or the sequence order of the basis functions for the local FEs, the self-identified hierarchical basis function organization allows a simple assembly of the global equation system. Simultaneously, this method naturally guarantees the compatibility between FE and BI (Ismatullah and Eibert, 2009). All arbitrarily shaped components are meshed into tetrahedra (Volakis et al., 1998), apart from perfect electric conductors (PEC) or perfect magnetic conductors (PMC), where E and H are forced to vanish inside the volume. As FE-BI solves for the field distribution inside the volume together with the corresponding equivalent surface currents (Ismatullah and Eibert, 2009; Rao et al., 1982), the self-identified hierarchical basis functions describe the distribution of the fields within the tetrahedra. When observation points tend to the enclosed boundary surface, the boundary condition determines the continuity of the E and H fields (Harrington, 1961; Bladel, 1964; Mautz and Harrington, 1978). It is therefore useful to guarantee that the basis functions for the FE part are the same as the basis functions for the BI part.
Derived from the equivalence theorem, basis functions for the BI are used to compose equivalent 2-D surface currents (Rao et al., 1982; Chew et al., 2001; Ylä-Oijala and Taskinen, 2003). The currents are related to the surface unit normal vector and the polarization of the fields; thus, the tangential component of the FE basis functions on the surface is perpendicular to the current basis functions. With respect to the corresponding sources (electric current J_s or magnetic current M_s) and the surface normal unit vector, the corresponding subspaces ensure the compatibility between FE and BI. The LO basis functions have shortcomings when the simulation accuracy and the large number of unknowns are considered. The precision and efficiency of LO are difficult to improve with increasing numbers of unknowns. The solution with LO basis functions demands a mesh size of around λ/20 to λ/8. With coarser meshes, the elements may introduce inaccurate waveforms for the field reconstruction. The well-known mixed-order basis functions are successful for electromagnetic field distributions and surface current reconstructions. Rao-Wilton-Glisson (RWG) (Rao et al., 1982) basis functions are inherited as LO. As RWG is very effective for the BI, it is implemented as the first order of the Rotational Subspace (0th order) in the FE tetrahedra. The Nedelec HO basis functions also form the first order of the Gradient Subspace (1st order), the second order of the Rotational Subspace (2nd order), and so on, where this paper is restricted to 2nd order. Apart from the BI, LO and HO basis functions are also utilized within the FE and they improve the accuracy of the field computations. In FE-BI, the 0th and 1st order basis functions for FE and BI are easy to match. Both of them are edge related and follow the same direction of the edge vector. The situation for 2nd order basis functions is more complicated. 2nd order basis functions are face related, so that, to achieve compatibility between FE and BI, their basis functions have to maintain the same global node order. The tetrahedral FE basis functions defined in Table 1 have a format represented by subscripts k = (i, j) and k = (r, s, t), as illustrated in Fig. 1. As elements of matrices need row and column positions, the subscripts (m_i, m_j) and (m_r, m_s, m_t) are introduced for row basis functions and the subscripts (n_i, n_j) and (n_r, n_s, n_t) are assigned to column basis functions. In this work, (m_i, m_j) and (n_i, n_j) contain the global order of the local node numbers for edge related basis functions, whereas (m_r, m_s, m_t) and (n_r, n_s, n_t) represent the global order of the nodes for face related basis functions, where k = (i, j) and k = (r, s, t) store the local node numbers of the finite elements.
In the FE analysis, the self-identified hierarchical basis functions are derived from the geometrical information of the tetrahedra. In the mesh file, the data structure for each tetrahedron contains six edge identities and four face identities. Each edge is constructed from two node numbers, and the order of the two nodes determines the global edge direction. Every face carries the identity of the corresponding outside boundary triangles, inside volume triangles or inside boundary triangles. The outside boundary triangles are described by three edges in certain directions. The first edge gives the first two nodes of the outside boundary triangle; the last node can be found through the other two edges. The order of these three nodes is inherited as the global node order. The inside volume triangles and inside boundary triangles are constructed from three nodes directly, and the node order is taken as the global node order. Through the index of the edge and face identities, the tetrahedral FEs can easily consult the corresponding edges and triangles. Thus, the determined global node order can be set for the LO and HO basis functions. As shown in Fig. 1, (m_i, n_i) always represent the starting point of the edge, (m_j, n_j) represent the ending point, and (m_r, n_r), (m_s, n_s), (m_t, n_t) represent the node order of the triangle. Practically, the local node numbers are arranged in the unique global order and assigned to the corresponding subscripts. When generating the system matrices, the assigned subscripts are set to the corresponding positions in the list of basis functions. Thus, elements of the system matrices are automatically assigned to global edges and faces (a sketch of this bookkeeping is given below). If HO basis functions are implemented in the FE, the matrix elements can be calculated analytically and precisely. Since these results are commonly not available in the FE literature, it is a major contribution of this paper to present these analytical matrix elements up to 2nd order. As the order of the basis functions is enlarged, the accuracy of the boundary integral (BI) should also be improved. As it turns out, the integration order for the testing surface integrals should be increased, the adaptive numbers of quadrature points in the singularity cancellation technique have to grow, and larger maximum numbers of spherical harmonic expansion terms are needed within the Multilevel Fast Multipole Method (Eibert, 2005). HO achieves satisfactory accuracy with larger mesh sizes and provides a better solution for non-uniform finite elements. Good radar cross section (RCS) results for PEC structures coated by dielectric materials are obtained. Since LO is inherited by the hierarchical bases, orthogonality or near-orthogonality of the basis functions is useful for HO. Based on the structure of the tetrahedra, system matrices built from nearly orthogonal basis functions converge faster and the solution is more accurate, so that the flexibility in mesh size provided by HO gives a more feasible solution. Meanwhile, HO improves the accuracy and also reduces the number of finite elements. FE-BI solutions for coated spheres and the Flamme aircraft based on the self-identified hierarchical nearly orthogonal basis functions are explicitly illustrated. The material of the layered sphere is homogeneous, isotropic and lossy. A variety of FE-BI simulations with up to 3 million unknowns based on self-identified basis functions are presented. The accuracy of the HO test cases is good, and the simulation results based on HO basis functions are also compared with the LO situations.
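The self-identified bookkeeping described above boils down to letting the global node numbers themselves define the ordering of edge- and face-related unknowns, so that shared edges and faces automatically receive identical orientations in neighboring tetrahedra. The following sketch is a simplified illustration of the idea; the DOF numbering scheme is hypothetical and not the paper's implementation.

```python
from itertools import combinations

class DofTable:
    """Global DOF bookkeeping keyed by globally ordered node tuples."""
    def __init__(self):
        self.edge_dof, self.face_dof = {}, {}

    def edge(self, n1, n2):
        key = tuple(sorted((n1, n2)))      # global order (m_i, m_j), m_i < m_j
        return self.edge_dof.setdefault(key, len(self.edge_dof)), key

    def face(self, n1, n2, n3):
        key = tuple(sorted((n1, n2, n3)))  # global order (m_r, m_s, m_t)
        return self.face_dof.setdefault(key, len(self.face_dof)), key

def element_dofs(tet_nodes, table):
    """Edge- and face-related DOFs of one tetrahedron in global node order."""
    edges = [table.edge(*e) for e in combinations(tet_nodes, 2)]  # 6 edges
    faces = [table.face(*f) for f in combinations(tet_nodes, 3)]  # 4 faces
    return edges, faces

table = DofTable()
# two tetrahedra sharing face (2, 3, 7): the shared edges and the shared face
# receive the same global DOF and node ordering in both elements automatically
for tet in [(2, 7, 3, 11), (3, 2, 7, 25)]:
    print(tet, element_dofs(tet, table))
```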
Finite element variational formulation
Consider the configuration of an arbitrary volume as shown in Fig. 2. A typical FE model consists of a finite volume V_a with possibly anisotropic and inhomogeneous materials inside. The materials are characterized by the relative permittivity tensor ε_r(r) and the relative permeability tensor µ_r(r). A_d is the assembled enclosing envelope, and n(r) is the normal vector pointing out of V_a.
Assuming the field expansions E = Σ_n u_n α_n and H = Σ_n i_n α_n, as well as a suppressed time dependence e^{jωt}, a linear system of equations (Jin, 2002; Tzoulis and Eibert, 2005) is obtained as Eq. (1). The system matrices [R_mn], [S_mn], [T_mn] and the right-hand side vector [w_m] are defined in Eqs. (2)-(5), where a_m and a_n are field basis functions, k_0 = ω√(ε_0 µ_0) is the free space wave number, Z_0 = √(µ_0/ε_0) is the free space intrinsic impedance, and J_d is a volume current density source.
The hierarchical basis functions are based on the normalized barycentric (simplex) coordinates in the FE tetrahedra (λ_0, λ_1, λ_2, λ_3), where λ_0 + λ_1 + λ_2 + λ_3 = 1. The functions are also related to the gradients ∇λ_i (i = 0, 1, 2, 3) and to the volume of the tetrahedron V_T. Equation (2) is related to the curl of the basis functions. Equations (2) and (3) form the system matrices related to the electric field unknowns [u_n]. Equation (4) generates the matrices related to the surface magnetic field unknowns [i_n], where the integration region is the envelope of V_a. Equation (5) is related to the current excitation inside the volume. As V_a is the union of the element volumes V_e and A_d the union of the element faces A_e, the integration intervals of Eqs. (2), (3), (4) and (5) are the combination of the finite elements. With the simplex coordinates, the properties of the basis functions allow for an analytical solution of the matrices in Eq. (1).
3-D HO self-identified basis functions
The general format of the FE-BI basis functions for the different orders is displayed in Table 1. A_i (i = 0, 1, 2, 3) is the local face area vector related to the surfaces of the tetrahedron, pointing into the volume, and V_T is the tetrahedral volume. The face vectors are linearly independent within the tetrahedron, and the surface tangential part of the field basis functions is perpendicular to the surface current basis functions. The field basis functions are distributed inside the finite element volume. Towards the tetrahedral boundary edges and faces, the corresponding tangential parts of the edge basis functions for 0th and 1st order exist on the adjacent connected surfaces, and they are zero on the other edges and faces. The tangential components of the face-based functions for 2nd order exist only on the related face, but vanish on the other, unrelated faces and on all edges.
The compatibility between FE and BI is easily achieved by utilizing self-identified hierarchical bases defined by the global node order. The identified hierarchical basis functions α_i are composed of the rotational first order (0th order) represented by a_1, the gradient first order (1st order) represented by b_1, and the rotational second order (2nd order) related to c_2 and d_2. The properties of the identified hierarchical basis functions are displayed in Table 1; (i, j) and (r, s, t) in Table 1 contain the local node numbers in the series defined by the global order. Also, the curl of the identified hierarchical basis functions is given.
With the self-identified hierarchical basis functions, analytical solutions for the system matrices can be achieved through the properties of the simplex coordinates in tetrahedra. The inner products of the simplex coordinates (Lapidus and Pinder, 1982) and the dot and cross products between the face vectors have closed forms (Eqs. 6-10). Combining the identified basis functions and their properties (Table 1, Eqs. 6-10), the system matrix [R_mn] is calculated in Table 2 and [S_mn] is given in Table 3.
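The analytical matrix elements rest on the classical closed-form integral of simplex-coordinate monomials over a tetrahedron, ∫_V λ_0^a λ_1^b λ_2^c λ_3^d dV = a! b! c! d! 6V_T / (a+b+c+d+3)! (Lapidus and Pinder, 1982). A minimal sketch of how such inner products could be tabulated (variable names are ours):

```python
from math import factorial

def simplex_integral(exponents, volume):
    """Integral of lambda_0^a * lambda_1^b * lambda_2^c * lambda_3^d
    over a tetrahedron of the given volume, via the classical formula
    a! b! c! d! / (a + b + c + d + 3)! * 6 * V_T."""
    num = 1
    for e in exponents:
        num *= factorial(e)
    return num * 6 * volume / factorial(sum(exponents) + 3)

V_T = 1.0
# Inner products <lambda_i, lambda_j> entering the [R_mn], [S_mn] entries:
print(simplex_integral((2, 0, 0, 0), V_T))  # <lambda_0, lambda_0> = V_T / 10
print(simplex_integral((1, 1, 0, 0), V_T))  # <lambda_0, lambda_1> = V_T / 20
```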
From Eqs. (2) and (3), the matrices [R_mn] and [S_mn] are symmetric, so that the elements at symmetric positions in Tables 2 and 3 are identical, and the functions are explicitly the same along columns and rows.
The system matrix [T_mn] depends on the surface boundaries of the volume, so that the dimension of the hierarchical space is reduced. In fact, Eq. (4) can be written in the reduced surface form (11), and the hierarchical space shrinks correspondingly. With the properties of surface triangle finite elements (Lapidus and Pinder, 1982), and based on Eq. (11), [T_mn] is computed accordingly. From Eq. (5), [w_m] follows, where the current source J_d can be used directly in the product with the self-identified 3-D basis functions, and the vector [w_m] can easily be evaluated. Thus, with analytical solutions for the hierarchical basis functions, the system matrices of the FE integrals can be generated analytically; this avoids any numerical error accumulation and yields accurate matrices (Sun et al., 2001).
Surface integral formulation
With Huygens' principle (Ismatullah and Eibert, 2009; Harrington, 1961; Chew et al., 2001), the radiation sources can be replaced by equivalent electric surface currents J_s and magnetic surface currents M_s on the enclosing volume boundary A_d for the evaluation of the free-space radiation. The numerical solution for the surface currents is based on the well-known EFIE, MFIE and CFIE (Ismatullah and Eibert, 2009; Rao et al., 1982; Ylä-Oijala and Taskinen, 2003), where the vector operators L and K appear in the usual way; E_inc and H_inc are the incident electric and magnetic fields, G_0 is the scalar Green's function in free space, and α is a number between zero and one. The equivalent currents J_s and M_s are given by the tangential field traces on A_d, J_s = n × H and M_s = E × n. Following (Ylä-Oijala and Taskinen, 2003), J_s and M_s are expanded in the surface basis {f_n}, where f_n is the 2-D surface hierarchical basis function for BI, and i_n and u_n in Eq. (27) are the coefficients with respect to the surface elements; these determine the total number of BI unknowns. The discretized solution for the MoM can be achieved through Galerkin's process, and fast solutions as given by (Ismatullah and Eibert, 2009, 2008; Notaros, 2002; Eibert, 2005; Notaros, 2008) can be utilized. The definition of the HO f_n in 2-D is derived from the 3-D HO a_n; thus, the system matrices from MoM and FE will be compatible, based on the same geometrical structure information of the object. The system matrices from BI are described in Ismatullah and Eibert (2009).
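To make the equivalence step concrete, the sketch below forms the equivalent surface currents J_s = n × H and M_s = E × n at a sampled surface point, together with a CFIE-style combination controlled by the parameter α. This is an illustrative sketch under our own conventions, not the paper's implementation.

```python
import numpy as np

Z0 = np.sqrt(4e-7 * np.pi / 8.8541878128e-12)  # free space impedance, ~376.73 ohm

def equivalent_currents(E, H, n_hat):
    """Equivalent surface currents on the envelope A_d (Huygens' principle):
    J_s = n x H,  M_s = E x n  (fields sampled at a surface point)."""
    J_s = np.cross(n_hat, H)
    M_s = np.cross(E, n_hat)
    return J_s, M_s

def cfie_residual(efie_res, mfie_res, alpha=0.5):
    """CFIE combines EFIE and MFIE to suppress interior resonances:
    alpha * EFIE + (1 - alpha) * Z0 * MFIE, with 0 < alpha < 1."""
    return alpha * efie_res + (1.0 - alpha) * Z0 * mfie_res

# Example: plane-wave-like field sample on a facet with outward normal +z.
n_hat = np.array([0.0, 0.0, 1.0])
E = np.array([100.0, 0.0, 0.0])          # E_x = 100 V/m
H = np.array([0.0, 100.0 / Z0, 0.0])     # matching magnetic field
print(equivalent_currents(E, H, n_hat))
```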
Linear algebraic equation system
To solve for the electric field [u] and the magnetic field [i], the subsystem from FE in the form of Eq. (1) and the subsystem generated by BI (Ismatullah and Eibert, 2009) must be combined into a complete system. The BI subsystem based on the EFIE may introduce resonances into the final system, so it is necessary to utilize the CFIE, which offers similarly satisfactory accuracy. As a result, the subsystems can be arranged in block form, where each sub-matrix is derived from FE-BI for the corresponding unknowns u and i: M_FE_u is square, symmetric and sparse; M_FE_i is rectangular and sparse; M_BI_u is rectangular and fully populated; M_BI_i is square, symmetric and fully populated; V_FE and V_BI are the excitation vectors for FE and BI. The complete combined system solves the electric and magnetic fields simultaneously, so that the equivalent surface electric and magnetic currents can be obtained.
Numerical results
To verify the accuracy of the analytical matrix elements and of the global matrix assembly in FE-BI, several numerical simulation results are shown in this section. A cogent demonstration utilizes a coated sphere test case, where a PEC sphere is enclosed by a layer of dielectric material. The analytical RCS is well known from Mie scattering (Balanis, 1989). Good matching of the RCS between the analytical solution and the numerical method verifies the efficacy of FE-BI. As a more complicated test case, a second sphere is presented; with a higher frequency and a finer mesh, more unknowns are handled. Moreover, an example of an FE-BI application to very large scale simulations is shown through the RCS of the Flamme aircraft. As the 0th order of FE-BI has been verified in many published articles (Tzoulis and Eibert, 2005; Eibert and Hansen, 1997; Eibert, 2007), it can be utilized as a reference for HO FE-BI. The efficiency of FE-BI based on different orders of self-identified basis functions is presented.
The sphere simulations were performed on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz processor, 16.0 GB of installed memory (RAM) and a 64-bit operating system. The simulation of the Flamme aircraft was run on a server with an Intel(R) Xeon(R) CPU E5630 @ 2.53 GHz (2 processors), 96.0 GB of installed memory (RAM) and a 64-bit operating system. All simulations were computed on one core.
Coated sphere I
For the coated sphere, the RCS from the 0th, 1st and 2nd order of self-identified basis functions for FE-BI is compared with Mie scattering, as shown in Fig. 3.
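The block structure described above (sparse FE blocks coupled with fully populated BI blocks) can be mimicked on a toy scale; the following SciPy sketch assembles and solves such a combined system. All sizes and values are made up for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n_u, n_i = 40, 12  # toy counts of FE (electric) and BI (magnetic) unknowns

# Sparse, symmetric FE block for the volume unknowns u.
A = sp.random(n_u, n_u, density=0.1, random_state=0).astype(complex)
M_FE_u = (A + A.T + 10 * sp.eye(n_u)).tocsr()
# Rectangular sparse FE coupling block; dense (fully populated) BI blocks.
M_FE_i = sp.random(n_u, n_i, density=0.2, random_state=1).astype(complex).tocsr()
M_BI_u = rng.standard_normal((n_i, n_u)) + 0j
M_BI_i = rng.standard_normal((n_i, n_i)) + 5j * np.eye(n_i)

# Complete combined system: electric and magnetic fields solved together.
K = sp.bmat([[M_FE_u, M_FE_i],
             [sp.csr_matrix(M_BI_u), sp.csr_matrix(M_BI_i)]]).tocsr()
rhs = np.concatenate([np.zeros(n_u), rng.standard_normal(n_i)]).astype(complex)

x = spla.spsolve(K, rhs)
u, i_coef = x[:n_u], x[n_u:]
print(np.linalg.norm(K @ x - rhs))  # residual check, ~ 0
```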
The coated sphere consists of a PEC core with radius 0.9 m and a dielectric layer with thickness 0.1 m. The dielectric layer properties are given by ε_r = 3 − 0.1j and µ_r = 1.0. The incident wave is at 550 MHz, propagating towards the +z direction, with an electric field of 100 V/m along the x direction (E_x = 100 V/m). For 0th order, the mesh size was set to 0.04 m in the Hypermesh software (HyperWorks, 2012); the mean edge length is 4.65 cm, with minimum edge length 2.24 cm and maximum edge length 8.11 cm. The total number of unknowns is 154 822, and the running time was 1314.8 s. For 1st order, the same mesh as for 0th order was utilized; the total number of unknowns is 309 644, and the running time was 4763.4 s. For 2nd order, the mesh size is set to 0.1 m; the mean edge length is 10.93 cm, with minimum edge length 6.48 cm and maximum edge length 18.08 cm. The total number of unknowns is 70 448, and the running time was 419.6 s. The mesh size of LO is around λ/8, while the mesh size of HO is enlarged up to around λ/3. HO with a coarser mesh and fewer unknowns achieves results as accurate as LO with a finer mesh.
Coated sphere II
For the second coated sphere test case, the numerical RCS from the different orders is also compared with Mie scattering. The second coated sphere contains a PEC sphere core with radius 0.5 m, enclosed by a dielectric layer with thickness 0.0025 m. The properties of the dielectric layer are ε_r = 2.5 − 0.5j and µ_r = 1.0. The incident wave is at 3 GHz, propagating towards the +z direction, with an electric field of 100 V/m along the x direction (E_x = 100 V/m). The results for the RCS are shown in Fig. 4. For 0th order, the mesh size was set to 0.01 m; the mean edge length is 0.858 cm, with minimum edge length 0.250 cm and maximum edge length 1.631 cm. The total number of unknowns is 411 339, and the running time was 3525.6 s. For 1st order, the same mesh as for 0th order is utilized; the total number of unknowns is 822 678, and the running time was 4271.3 s. For 2nd order, there are two test cases. One uses the same mesh as the 0th and 1st order; the total number of unknowns is 1 973 604, and the running time was 7725.5 s. The other simulation utilizes a mesh size of 0.03 m in Hypermesh; the mean edge length is 2.554 cm, the minimum edge length is 0.250 cm and the maximum edge length is 4.316 cm. The total number of unknowns is 202 522, and the running time was 1357.6 s. The numerical RCS results are compared with the Mie solution. The mesh size of LO is around λ/8, while HO with the finer mesh also works well, even though far more unknowns are handled. When the mesh size of HO is increased up to around λ/4, HO with the coarser mesh maintains precision as good as the results of LO with the finer mesh, as shown in Fig. 4.
Flamme
The Flamme case is an application of FE-BI to very large scale simulation. The Flamme is located in the xy plane, with the nose heading along the +x axis, as shown in Fig. 5.
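The quoted electrical mesh sizes can be roughly cross-checked from the reported frequencies and maximum edge lengths. The short script below does this bookkeeping under our own convention that the electrical size refers to the maximum edge length; it only approximately reproduces the quoted λ/8 to λ/3 figures.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

cases = {
    # name: (frequency in Hz, maximum edge length in m)
    "sphere I,  0th/1st order": (550e6, 0.0811),
    "sphere I,  2nd order":     (550e6, 0.1808),
    "sphere II, 0th/1st order": (3e9,   0.01631),
    "sphere II, 2nd order":     (3e9,   0.04316),
}

for name, (freq, h_max) in cases.items():
    lam = C0 / freq
    print(f"{name}: lambda = {lam:.3f} m, h_max = lambda/{lam / h_max:.1f}")
# Sphere I gives roughly lambda/7 (LO) vs lambda/3 (HO); sphere II gives
# roughly lambda/6 (LO) vs lambda/2.3 (HO), on the order of the quoted
# lambda/8 to lambda/3 (or lambda/4) mesh-size range.
```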
The Flamme is enclosed by a layer of lossy dielectric material with a thickness of approximately 1 cm. The permittivity of the dielectric material is ε_r = 1.21 − 10j and the permeability is µ_r = 1. The simulation frequency is 2.5 GHz. The incident plane wave propagates towards the −x direction, with electric field E_z = 100 V/m. To visualize the absorbing effects of the lossy dielectric material, a PEC Flamme simulated with BI with 0th order basis functions is utilized for comparison. The RCS of the PEC and layered Flamme in different cut planes is shown in Figs. 6-10. The PEC Flamme is simulated through BI with the 0th order of self-identified basis functions; the layered Flamme is simulated through FE-BI with the 0th, 1st and 2nd order of self-identified basis functions. As the efficacy of the 0th order with a finer mesh has been verified, it is used here as a reference. The RCS comparison shows that most of the input power passes over the Flamme. In the scattered directions, the scattered power is evidently absorbed by the dielectric material. For PEC, the mesh size of the PEC Flamme was set to 0.01 m; the mean edge length is 1.011 cm, with minimum edge length 0.100 cm and maximum edge length 2.559 cm. The total number of unknowns is 692 952: the number of BI electric currents is 692 952 and the number of BI magnetic currents is 0. The number of levels for the MLFMM is 8 and the peak memory consumption was 4083.746 MBytes. The running time was 75 529.3 s. For 0th order, the mesh size of the layered Flamme was set to 0.01 m; the mean edge length is 1.083 cm, with minimum edge length 0.044 cm and maximum edge length 2.671 cm. The total number of unknowns is 2 081 547: the number of BI electric currents is 690 960 and the number of BI magnetic currents is 602 284. The number of levels for the MLFMM is 8 and the peak memory consumption was 11 384.43 MBytes. The running time was 23 511.6 s. For 1st order, the mesh size of the layered Flamme was set to 0.02 m; the mean edge length is 1.770 cm, with minimum edge length 0.065 cm and maximum edge length 3.933 cm. The total number of unknowns is 1 252 430: the number of BI electric currents is 427 144 and the number of BI magnetic currents is 360 976. The number of levels for the MLFMM is 7 and the peak memory consumption was 8825.734 MBytes. The running time was 12 141.3 s. For 2nd order, the mesh size was set to 0.02 m; the mean edge length is 1.770 cm, with minimum edge length 0.065 cm and maximum edge length 3.933 cm. The total number of unknowns is 2 941 242: the number of BI electric currents is 712 284 and the number of BI magnetic currents is 602 004. The number of levels for the MLFMM is 8 and the peak memory consumption was 18 536.68 MBytes. The running time was 47 871.3 s.
Conclusion
Self-identified hierarchical 3-D vector basis functions were generated for the hybrid finite element (FE) - boundary integral (BI) technique, where analytical solutions for the FE matrix elements have been presented up to 2nd order. Self-identified basis functions provide feasibility for FE and effectively maintain compatibility with BI. Going from 1st to 2nd order, FE-BI allows for a mesh size increase from λ/8 up to λ/3. The coated sphere test cases showed good accuracy, and the Flamme simulations demonstrated that FE-BI based on self-identified basis functions can be applied to very large scale simulations.
Figure 1. The definition of subscripts based on a single tetrahedron. Every vertex index can be used as a row index m or as a column index n.
Figure 2. The general geometrical configuration for the finite element-boundary integral method.
Figure 4. Bistatic RCS of coated sphere II @ 3 GHz on the xz cut half plane (ϕ = 0°).
Figure 5. The geometry of a Flamme airplane.
Figure 11. Electric surface current density |J| in A/m distribution of a covered Flamme airplane with plane wave incidence @ 2.5 GHz.
Table 1. Hierarchical basis functions and properties within the tetrahedron.
Table 2. Analytical solution for [R_mn] matrix elements.
Table 3. Analytical solution for [S_mn] matrix elements.
MUMFORD–SHAH-TV FUNCTIONAL WITH APPLICATION IN X-RAY INTERIOR TOMOGRAPHY
1. Introduction. The primary motivation of this research came from our attempt to further explore the interior problem of X-ray CT. This problem, which arises from technical limitations of the scanning apparatus and the requirement of X-ray dose reduction, amounts to reconstructing the object image in a region of interest (ROI) from truncated projection data (interior data) pertaining to the ROI. Conventional wisdom asserts that the interior problem is severely ill-posed, since it is not uniquely solvable [18,28]. Yet microlocal analysis indicates that the singularities of the object image in the ROI are visible in a stable way [32] from interior data by singular value decompositions [25,26] or by Lambda tomography [15]. With the recent advancement of CT theory, one can pursue an accurate reconstruction of the ROI from interior data by incorporating some a priori knowledge of the object image. Specifically, if the object image in a subregion of the ROI is known a priori, then the ROI can be exactly reconstructed via the analytic continuation method [24,41]. However, this condition is satisfied only on few occasions, which narrows the applicability of this method. A universal prior is the sparsity of the image representation, which leads to the total variation (TV) minimization method for the X-ray interior problem. The ROI can be reconstructed exactly by the TV minimization method under the assumption that the object image is piecewise constant in the ROI [42,43]. As an extension of the TV minimization method, the high order total variation (HOT) minimization method was proposed to reconstruct a piecewise polynomial ROI image [40]. Since TV was introduced for image restoration [35], the TV minimization method has gradually become a common tool for imaging and image processing problems in the compressed sensing framework. For a piecewise smooth image, the total variation comprises a smooth term and an edge term,
TV(u) = ∫_Ω |∇u| dx + ∫_{S_u} |u⁺ − u⁻| dH^{N−1},
where |u⁺ − u⁻| denotes the image jump across the discontinuity set S_u of u. The TV regularization method can be represented as the following unconstrained optimization problem:
(2) min_u αTV(u) + ‖Au − g‖².
Various algorithms, such as the gradient projection method [35], a second-order method [38] and split Bregman iteration [17], have been proposed to solve this problem. TV regularization is well known for recovering sharp edges of the object image, but the edge set is not a direct result of this method, while edge information is always helpful in practical applications. The goal of this investigation is to present an image reconstruction scheme that separates the edge from the smooth part of the image while keeping the advantages of TV regularization, and then to apply it to the X-ray interior problem. As a successful approach to dealing with the image edge, the Mumford-Shah (MS) functional was originally proposed for image denoising and segmentation [27]; in that setting A is the identity operator, and SBV(Ω) is the space of all special functions of bounded variation [2]. Since for any u ∈ SBV(Ω) the set S_u is L^N-negligible, the integration domain of ∇u is written as Ω in MS(u). The MS functional aims to approximate the object image by a piecewise smooth function. With A being a forward operator to the measurement data g, the MS functional and its variants have been applied to other image processing and imaging problems, such as image inpainting [14], image deblurring [4], electric impedance tomography [34], X-ray tomography [33], electron tomography [22] and SPECT [23].
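As a concrete reading of problem (2), the following sketch minimizes a smoothed version of αTV(u) + ‖Au − g‖² by plain gradient descent on a toy denoising example (A the identity). The discretization, step size and stopping rule are our own choices, not the algorithms cited above.

```python
import numpy as np

def grad(u):
    """Forward differences with replicated (Neumann-like) boundaries."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def tv_objective(u, A, g, alpha, eps=1e-6):
    """alpha * TV_eps(u) + ||A u - g||^2, with smoothed TV."""
    gx, gy = grad(u)
    return alpha * np.sqrt(gx**2 + gy**2 + eps).sum() + ((A(u) - g)**2).sum()

# Toy example: A = identity (denoising) on a piecewise constant image.
rng = np.random.default_rng(0)
u_true = np.zeros((64, 64)); u_true[16:48, 16:48] = 1.0
g = u_true + 0.1 * rng.standard_normal(u_true.shape)
A = lambda u: u
alpha, tau = 0.1, 0.2

u = g.copy()
for _ in range(300):
    gx, gy = grad(u)
    nrm = np.sqrt(gx**2 + gy**2 + 1e-6)
    px, py = gx / nrm, gy / nrm
    # approximate divergence (adjoint of the forward differences above)
    div = np.diff(px, axis=0, prepend=px[:1, :]) + np.diff(py, axis=1, prepend=py[:1, :])
    u -= tau * (2 * (A(u) - g) - alpha * div)

print(tv_objective(g, A, g, alpha), "->", tv_objective(u, A, g, alpha))
```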
The regularizing properties of the MS functional have been well studied in previous works [13,20]. Since the MS functional in its original setting is quite sophisticated, some approximate formulations, such as level sets [11], graph cuts [8] and convex relaxation [31], have been employed in various problems for the purpose of numerical realization. A more popular approach is to approximate the MS functional by the Blake-Zisserman energy [7,10] or by the Ambrosio-Tortorelli elliptic functional [3] in the sense of Γ-convergence [9]. For an object image that is piecewise constant, we endeavor to recover the image and its edge simultaneously. Incorporating the MS functional with TV, we propose a new functional, named Mumford-Shah-TV (MSTV), defined on a subset SBV^c_{a,b}(Ω) of SBV(Ω). Compared with the results on the MS functional in [20], we introduce the extra constraint ∫_Ω |∇u|² dx ≤ c in SBV^c_{a,b}(Ω) to guarantee the existence of solutions of the MSTV functional, where c > 0 is a constant. We systematically study the existence and stability of the minimizers of the MSTV functional. Accordingly, we present an Ambrosio-Tortorelli type approximation of the MSTV functional in the sense of Γ-convergence. Related works have applied L¹ regularization directly to the Blake-Zisserman [16,19] and Ambrosio-Tortorelli [36] type approximations of the MS functional; in a different way, we apply L¹ regularization in the MS functional itself. MSTV regularization is prone to reconstruct a piecewise constant image and to preserve sharp edges, but the reconstructed image is not restricted to being piecewise constant, as it is in the level-set method for MS regularization. We apply the MSTV regularization method to the X-ray interior problem and develop an algorithm based on split Bregman and OS-SART iterations. The remainder of this paper is organized as follows. In §2, we define the MSTV functional and study the existence and stability of its minimizers; we then present an Ambrosio-Tortorelli type approximation in the sense of Γ-convergence and apply it to the X-ray interior problem. In §3, physical experiments are used to validate the MSTV regularization method and the proposed algorithm. Finally, we discuss some related issues and conclude in §4.
2. Mumford-Shah-TV functional and its application in the X-ray interior problem.
2.1. Mumford-Shah functional. We will always assume that Ω is a bounded open subset of R^N. A function u ∈ L¹(Ω) is of bounded variation if its distributional derivative Du is representable by a finite Radon measure in Ω [2]. The space of functions of bounded variation is denoted by BV(Ω). We say that a function u ∈ L¹(Ω) is approximately continuous at x ∈ Ω if there exists ũ(x) ∈ R such that
lim_{ρ→0} ρ^{−N} ∫_{B_ρ(x)} |u(y) − ũ(x)| dy = 0,
where B_ρ(x) = {y ∈ Ω : |y − x| < ρ}, and ũ is called the approximate limit of u. The set of points without an approximate limit is L^N-negligible and is denoted by S_u. If u ∈ BV(Ω), then for H^{N−1}-a.e. x ∈ S_u there exist u⁺(x), u⁻(x) ∈ R and ν ∈ S^{N−1} such that u attains the one-sided approximate limits u⁺(x) and u⁻(x) on the two half-balls of B_ρ(x) determined by ν; ∇u is called the approximate gradient of u. For any u ∈ BV(Ω), the distributional derivative Du can be decomposed into three parts,
Du = ∇u L^N + (u⁺ − u⁻) ν_u H^{N−1}⌞S_u + D^c u.
Here, ∇u L^N and (u⁺ − u⁻) ν_u H^{N−1}⌞S_u are the Lebesgue integrable and jump parts, respectively, and D^c u is the so-called Cantor part. A function u ∈ BV(Ω) is called a special function of bounded variation if D^c u = 0. The space of all special functions of bounded variation is denoted by SBV(Ω) and enjoys the closure and compactness theorem described below.
Theorem 2.1 (Closure and compactness theorem in SBV(Ω), [2]).
Let p > 1 and let {u_k}_{k∈N} be a sequence in SBV(Ω) satisfying
sup_{k∈N} ( ∫_Ω |∇u_k|^p dx + H^{N−1}(S_{u_k}) + ‖u_k‖_{L^∞(Ω)} ) < +∞.
Then there exist a subsequence, still denoted by {u_k}_{k∈N}, and some u ∈ SBV(Ω) such that u_k → u in L¹_{loc}(Ω), ∇u_k ⇀ ∇u weakly in [L^p(Ω)]^N and H^{N−1}(S_u) ≤ lim inf_{k→∞} H^{N−1}(S_{u_k}).
Let Θ be an open subset of R^M, A : L²(Ω) → L²(Θ) a continuous operator, g ∈ L^∞(Θ), and X^b_a(Ω) = {u ∈ L^∞(Ω) : a ≤ u ≤ b a.e. in Ω}. The MS functional for imaging and image processing problems is defined as [20]
MS(u) = ‖Au − g‖² + α ∫_Ω |∇u|² dx + β H^{N−1}(S_u),
where α and β are two positive parameters, and ‖·‖ denotes the L²-norm throughout this paper. One can obtain a piecewise smooth approximation of the object image by minimizing MS(u) in SBV(Ω) ∩ X^b_a(Ω). MS regularization is algorithmically challenging because of the different nature of u and S_u. A popular approach is to approximate MS(u) by a sequence of simpler functionals in the sense of Γ-convergence.
Definition 2.2 (Γ-convergence, [9]). Let X be a metric space. We say that F_ε : X → [−∞, +∞] Γ-converges to F : X → [−∞, +∞] in X as ε → 0 if, for any sequence {ε_k}_{k∈N} converging to 0 and any u ∈ X, the following two conditions are satisfied:
1. for every sequence {u_k}_{k∈N} ⊂ X converging to u, F(u) ≤ lim inf_{k→∞} F_{ε_k}(u_k);
2. there exists a sequence {u_k}_{k∈N} ⊂ X converging to u such that F(u) ≥ lim sup_{k→∞} F_{ε_k}(u_k).
An approximation by the Ambrosio-Tortorelli elliptic functional introduces an auxiliary edge variable v and replaces the edge term H^{N−1}(S_u) by elliptic integrals of v; the approximating functional is usually minimized by an alternating scheme in u and v. It is worth noting that there is a diffusion term ∫_Ω (v_k |∇u|)² dx in (18); the edge may be blurred after some iterations in the numerical realization [21].
2.2. Mumford-Shah-TV functional. TV regularization is widely used in imaging and image processing problems, particularly for object images that are piecewise constant. Our goal is to recover a piecewise constant image while taking the edge information into account as well. Incorporating the MS functional with TV, we define a new functional, named Mumford-Shah-TV (MSTV), on the admissible set
SBV^c_{a,b}(Ω) = { u ∈ SBV(Ω) ∩ X^b_a(Ω) : ∫_Ω |∇u|² dx ≤ c },
where c is a positive constant. Without loss of generality, we will always assume that a, b and c are three fixed positive constants. The condition of the closure and compactness theorem 2.1 should be satisfied by at least one minimizing sequence of MSTV(u); otherwise, a minimizer of MSTV(u) may not exist. Hence, we impose the extra constraint ∫_Ω |∇u|² dx ≤ c on the image u; this constraint can always be satisfied in practical situations. Then, we obtain the existence of a minimizer of MSTV(u) in SBV^c_{a,b}(Ω).
Theorem 2.4. There exists at least one minimizer of MSTV(u) in SBV^c_{a,b}(Ω).
Proof. Let {u_k}_{k∈N} be a minimizing sequence of MSTV(u) in SBV^c_{a,b}(Ω). Since {u_k}_{k∈N} ⊂ SBV^c_{a,b}(Ω) and Ω is bounded, by Theorem 2.1 there exist a subsequence, still denoted by {u_k}_{k∈N}, and some u* ∈ SBV(Ω) such that u_k → u* in L¹(Ω), ∇u_k ⇀ ∇u* weakly in [L²(Ω)]^N and H^{N−1}(S_{u*}) ≤ lim inf_{k→∞} H^{N−1}(S_{u_k}). The weak convergence ∇u_k ⇀ ∇u* in [L²(Ω)]^N also implies weak convergence in [L¹(Ω)]^N. By the semicontinuity of the functional terms under weak convergence, we obtain that
(29) MSTV(u*) ≤ lim inf_{k→∞} MSTV(u_k) = inf_{u ∈ SBV^c_{a,b}(Ω)} MSTV(u).
It follows that u* ∈ SBV^c_{a,b}(Ω), and thus u* is a minimizer of MSTV(u) in SBV^c_{a,b}(Ω).
In order to study the stability of the minimizers of MSTV(u), we introduce the weak convergence in SBV(Ω).
Definition 2.5 (Weak convergence in SBV(Ω), [12]). We say that a sequence {u_k}_{k∈N} ⊂ SBV(Ω) weakly converges to u in SBV(Ω) if u ∈ SBV(Ω), u_k → u in L¹(Ω), ∇u_k ⇀ ∇u weakly in [L¹(Ω)]^N and sup_{k∈N} H^{N−1}(S_{u_k}) < +∞.
Let D₁ and D₂ be two subsets of Ω. We write D₁ ⊂̃ D₂ if H^{N−1}(D₁ \ D₂) = 0, and D₁ =̃ D₂ if D₁ ⊂̃ D₂ and D₂ ⊂̃ D₁. Then, we introduce a notion of convergence of sets based on the weak convergence in SBV(Ω).
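The alternating scheme for an Ambrosio-Tortorelli type approximation can be sketched as follows. This is a simplified illustration under our own assumptions: the ε|∇v|² smoothing term is dropped so that the v-step becomes point-wise, and plain gradient steps replace a proper linear solver for u; it is not the authors' implementation.

```python
import numpy as np

def grad(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def div(px, py):
    # approximate adjoint of the forward differences above
    dx = np.diff(px, axis=0, prepend=np.zeros((1, px.shape[1])))
    dy = np.diff(py, axis=1, prepend=np.zeros((py.shape[0], 1)))
    return dx + dy

def at_alternating(g, alpha=0.5, beta=0.05, eps=0.05, outer=20, inner=25, tau=0.05):
    """Alternating minimization of the simplified Ambrosio-Tortorelli energy
    ||u - g||^2 + alpha * int v^2 |grad u|^2 + beta * int (1 - v)^2 / (4 eps),
    where dropping the eps |grad v|^2 term makes the v-step point-wise:
    v = beta / (beta + 4 eps alpha |grad u|^2)."""
    u, v = g.copy(), np.ones_like(g)
    for _ in range(outer):
        for _ in range(inner):  # gradient steps on u with v fixed
            gx, gy = grad(u)
            u -= tau * (2.0 * (u - g) - 2.0 * alpha * div(v**2 * gx, v**2 * gy))
        gx, gy = grad(u)        # closed-form point-wise v-step
        v = beta / (beta + 4.0 * eps * alpha * (gx**2 + gy**2))
    return u, v

rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[:, 32:] = 1.0
u, v = at_alternating(truth + 0.05 * rng.standard_normal(truth.shape))
print(v.min(), v.max())  # v drops well below 1 along the vertical jump
```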
Definition 2.6 (σ-convergence, [12]). Let {E_k}_{k∈N} be a sequence satisfying E_k ⊂ Ω for every k ∈ N and sup_{k∈N} H^{N−1}(E_k) < +∞. We say that {E_k}_{k∈N} σ-converges to E ⊆ Ω in Ω if the following conditions are satisfied:
1. if {u_i}_{i∈N} weakly converges to u in SBV(Ω) and S_{u_i} ⊂̃ E_{k_i} for some subsequence {E_{k_i}}_{i∈N}, then S_u ⊂̃ E;
2. there exists a sequence {u_k}_{k∈N} ⊂ SBV(Ω) weakly converging to some u in SBV(Ω) such that S_u =̃ E and S_{u_k} ⊂̃ E_k for every k ∈ N.
By the same argument as used in [20], we obtain the stability of the minimizers of MSTV(u) based on the weak convergence in SBV(Ω) and the σ-convergence of sets.
Theorem 2.7. Let {g_k}_{k∈N} ⊂ L^∞(Θ) be a sequence converging to g in L²(Θ), and let u_k be a minimizer of MSTV(u; g_k) in SBV^c_{a,b}(Ω). Then there exist a subsequence, still denoted by {u_k}_{k∈N}, and some u* ∈ SBV^c_{a,b}(Ω) such that {u_k}_{k∈N} weakly converges to u* in SBV(Ω), {S_{u_k}}_{k∈N} σ-converges to S_{u*}, and u* minimizes MSTV(u; g) in SBV^c_{a,b}(Ω).
Proof. The proof is analogous to the proof of Lemma 8 of [20] and is omitted here.
An approximate formulation of MSTV(u) is necessary for the minimization of the MSTV functional, in order to address the difficulty caused by the edge term H^{N−1}(S_u); the following approximation result (Lemma 2.8) prepares this step.
Proof. Let {η_k}_{k∈N} be a positive sequence converging to 0 and let u_k be a solution of the constrained problem (34). By the regularizing properties of the MS functional [20], there exists at least one solution of problem (34), and H^{N−1}(Ω ∩ (S̄_{u_k} \ S_{u_k})) = 0. Using the theory of constrained problems [5], we have u_k → u in L²(Ω). For every k ∈ N the uniform bounds hold, so by Theorem 2.1 we can extract a subsequence, still denoted by {u_k}_{k∈N}, such that ∇u_k ⇀ ∇u weakly in [L²(Ω)]^N; together with (35), we deduce the corresponding convergences. If there exists a subsequence, still denoted by {u_k}_{k∈N}, such that ∫_Ω |∇u_k|² dx ≤ c, then by (39), (41) and the continuity of A, we conclude that u_k ∈ SBV^c_{a,b}(Ω) and MSTV(u_k) → MSTV(u) as k → ∞. Otherwise, we may assume that ∫_Ω |∇u_k|² dx > c for every k ∈ N. By (38), we have ∫_Ω |∇u_k|² dx → c. Let ũ_k = ζ_k u_k + (1 − ζ_k) a, with ζ_k as in (42) and ζ_k → 1. It is easy to check that ũ_k ∈ SBV^c_{a,b}(Ω) for every k ∈ N. By (43) and (44), we have ũ_k → u in L²(Ω) and ∫_Ω |∇ũ_k| dx → ∫_Ω |∇u| dx. It follows that MSTV(ũ_k) → MSTV(u) as k → ∞, and the proof is completed.
Then, we present an Ambrosio-Tortorelli type approximation of the MSTV functional in the sense of Γ-convergence: with G_ε and G defined on [L²(Ω)]², G_ε Γ-converges to G in [L²(Ω)]² as ε → 0 (Theorem 2.9).
Proof. For any ε_k → 0, we may assume that sup_{k∈N} G_{ε_k}(u_k, v_k) < +∞. Moreover, let ξ ∈ S^{N−1} and let D be an open subset of Ω. For L^{N−1}-a.e. z ∈ ξ^⊥ and any δ > 0, there exists a closed set K^δ_{z,ξ} ⊂ R such that L¹(K^δ_{z,ξ}) < δ, where u^{z,ξ}_k(t) = u_k(z + tξ), u̇^{z,ξ}_k denotes the approximate gradient of u^{z,ξ}_k, and D_{z,ξ} = {t ∈ R : z + tξ ∈ D} with D_{z,ξ} ≠ ∅. By (50) and (51), we have, for L^{N−1}-a.e. z ∈ ξ^⊥, that both {(v^{z,ξ}_k)² u̇^{z,ξ}_k}_{k∈N} and {v^{z,ξ}_k u̇^{z,ξ}_k}_{k∈N} weakly converge to u̇^{z,ξ} in L²(D_{z,ξ} \ K^δ_{z,ξ}). Letting δ → 0, and using Fatou's lemma with (54), we obtain (55); using the same argument as for (55), we obtain (56). By the continuity of A, we have ‖Au_k − g‖² → ‖Au − g‖². Then we conclude that u ∈ SBV^c_{a,b}(Ω) and obtain the lim inf-inequality.
We now prove the lim sup-inequality and assume that G(u, v) < +∞. We first consider the case that H^{N−1}(Ω ∩ (S̄_u \ S_u)) = 0. Let d(x, S_u) = inf{|x − y| : y ∈ S_u} and define the two functions u_k and v_k piecewise in terms of d(x, S_u) as in (58) and (59). It is easy to verify that (u_k, v_k) ∈ H^c_{a,b}(Ω). By the same argument as used in the proof of Proposition 5.2 of [3], we obtain the convergence (61); using again the continuity of A, we have ‖Au_k − g‖² → ‖Au − g‖². By (61) and (63), we obtain the lim sup-inequality.
Finally, we complete the proof by applying Lemma 2.8 and a standard diagonal argument for the case that H^{N−1}(Ω ∩ (S̄_u \ S_u)) > 0.
2.3. Application in the X-ray interior problem. We now apply the MSTV functional to the X-ray interior problem in R². The object image u₀ is assumed to be compactly supported, and the ROI is a disk Ω_r = {x ∈ R² : |x| < r}. The X-ray interior problem is to reconstruct the ROI from the truncated projection data
(64) Ru₀(θ, s) = ∫_{θ·x=s} u₀(x) dx, θ ∈ S¹, |s| < r.
By Theorem 2.9, we reconstruct the ROI by solving the corresponding minimization problem (66). Here, v is the edge image of u and indicates the edge set S_u. In the discrete setting, u, v and g are represented as vectors with two indexes, where ∆ and ∆_d denote the spatial sampling intervals of the image and the detector, respectively, and ∆_a denotes the angular sampling interval of the scanning orbit. Correspondingly, the discrete projection operator is a matrix A = {a_{(k,l),(i,j)}} of size (N_d N_a) × (N_x N_y). The discrete representations of X^b_a(Ω) and H^c_{a,b}(Ω) are denoted by X̃^b_a and H̃^c_{a,b}, where ∇u = (δ_x u, δ_y u) is the first-order finite difference. Then, the discrete representation of (66) can be written as (73). Combining this with the algorithm we proposed to solve the L¹-regularization problem in CT imaging in [45], we solve (73) by the scheme (74)-(77), where 0 < η < 1, S(u^k) denotes the result of one OS-SART iteration [39] to solve Au = g, P_K denotes the projection operator onto a convex set K, and
(78) Ṽ^{k+1} = { v ∈ R^{N_x N_y} : ‖v ∇u^{k+1}‖² ≤ c }.
We solve problems (74) and (76) using split Bregman iteration and the conjugate gradient (CG) method, respectively. The projection onto X̃^b_a can be performed by simple point-wise truncation,
(79) (P_{X̃^b_a} u)_{(i,j)} = min{ b, max{ a, u_{(i,j)} } }.
The projection onto Ṽ^{k+1} can be represented as the constrained problem (80). By the method of Lagrange duality [6], (80) is equivalent to the dual problem (81), which is formulated through the Lagrange dual function of (80); problem (81) can be solved using Newton's method [29]. Together with the result in Theorem 2.3, and using the same scheme as (74)-(77), the MS functional can be minimized analogously, where (83) is the associated diffusion problem. The evaluation criteria E_rec and E_SSIM are used to evaluate the difference and similarity between u and u₀, where u₀ is the object image, and µ_u and σ_u are the mean value and standard deviation of the image u within the ROI, respectively. In practical applications, the parameters a and b can usually be estimated from prior knowledge, and an appropriate choice of α and β is made empirically. However, an accurate estimate of the parameter c is usually difficult, and a large value is preferred; in particular, c is set to +∞ in our experiments. The regularization parameter settings are listed in Table 3. Reconstruction results by the OS-SART iteration and the TV, MS and MSTV regularization methods are shown in Fig. 4. The images reconstructed by TV and MSTV regularization are approximately piecewise constant and preserve the sharp edges of the object image. The dot matrix within the phantom implies that the TV and MSTV regularization methods have the best spatial resolution performance for a piecewise constant object image. The edge of the object image within the ROI was captured exactly by both the MS and the MSTV regularization method. Since the edge is smoothed while solving the diffusion problem (83), the edge captured by MS regularization is broadened. The curves of E_rec and E_SSIM are plotted in Fig. 5, which show that the image reconstructed by MSTV regularization is the most accurate one according to these two criteria. Physical experiments are also conducted to test our method.
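Before turning to the experiments, the two projections used in the scheme (74)-(77) can be sketched as follows: the point-wise truncation (79), and the projection onto the constraint set (78), the latter solved here by bisection on the Lagrange multiplier instead of the Newton step mentioned in the text. The helper names are ours.

```python
import numpy as np

def project_box(u, a, b):
    """Projection onto X_a^b by point-wise truncation, cf. Eq. (79)."""
    return np.clip(u, a, b)

def project_weighted_ball(w, d, c, iters=60):
    """Project w onto {v : sum(d * v**2) <= c} with d >= 0, cf. Eqs. (78)/(80).
    The KKT conditions give v = w / (1 + lam * d) with lam >= 0 chosen so
    the constraint is active; lam is found by bisection here (the paper
    solves the Lagrange dual (81) with Newton's method)."""
    if np.sum(d * w**2) <= c:
        return w.copy()
    residual = lambda lam: np.sum(d * (w / (1.0 + lam * d))**2) - c
    lo, hi = 0.0, 1.0
    while residual(hi) > 0:   # bracket the root; monotone decreasing in lam
        hi *= 2.0
    for _ in range(iters):    # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return w / (1.0 + lam * d)

rng = np.random.default_rng(2)
w = rng.standard_normal((8, 8)); d = rng.random((8, 8))
v = project_weighted_ball(w, d, c=1.0)
print(np.sum(d * v**2))  # ~ 1.0, i.e. on the constraint boundary
```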
A frozen chicken is scanned using cone beam geometry, and the middle slice is reconstructed using fan beam projection data. 720 views are measured in the angle range [0, 2π), and the other geometry parameters of the scan are the same as in the numerical simulation. The object image can be modeled approximately as a piecewise constant function, and the object is scanned twice, at 130 kVp, 3.5 A and at 130 kVp, 0.5 A, respectively.
Table 1. Parameter settings of the numerical and physical experiments.
The reconstructed images are represented by a 512 × 512 matrix with a width of 140 mm. The ROI is a disk of radius 48.45 mm. The images reconstructed using non-truncated projection data and 10 OS-SART iterations are shown in Fig. 3. While reconstructing the ROI using the normal-dose data, we further reduce the scale of the data to 180 × 254; namely, 180 views and 254 detector units are used to reconstruct the ROI. This makes the reconstruction more challenging. Reconstruction results of the four regularization methods are shown in Fig. 4. The edge image of MSTV regularization contains more details of the object image compared with that of MS regularization, in which some edges with small jump size are missed. The images reconstructed from the low-dose data are shown in Fig. 5. Compared with the results at normal dose, the reconstruction quality for the low-dose data decreases significantly due to strong noise. It is well known that both TV and MS regularization have the property of noise reduction. Since the MSTV functional shares many common properties with them, it is understandable that MSTV regularization also possesses this noise-reduction property.
4. Discussion and conclusion. MSTV regularization can also be combined with other methods by applying a different data term, such as the Poisson-ROF denoising model for CT imaging [46] or an L¹-norm data term [37]. A Blake-Zisserman type approximation and further regularization properties of the MSTV functional will be considered in our future work. In this paper, we defined the MSTV functional and applied it to the interior problem of X-ray CT. We studied the existence and stability of minimizers of the MSTV functional, and we presented an Ambrosio-Tortorelli type approximation in the sense of Γ-convergence for the purpose of numerical realization. An algorithm based on split Bregman iteration and OS-SART was developed for the MSTV regularization method. Numerical and physical experiments demonstrate that both the image and its edge can be reconstructed with high quality.
MEDITERRANEAN JOURNAL OF HEMATOLOGY AND INFECTIOUS DISEASES
www.mjhid.org ISSN 2035-3006
Review article
Medical Treatment of Hepatocellular Carcinoma
Hepatocellular carcinoma (HCC) is the fifth most common neoplasm and the third leading cause of cancer-related deaths worldwide. Cirrhosis, most often due to viral hepatitis, is the predominant risk factor for HCC, and geographical differences in both risk factors and incidence are largely due to epidemiological variations in hepatitis B and C infection. Hepatic function is a relevant parameter in selecting therapy for HCC. The current clinical classification of HCC splits patients into 5 stages, with a specific treatment schedule for each stage. As patients at early stages can receive curative treatments, such as surgical resection, liver transplantation or local ablation, surveillance programs in high-risk populations have become mandatory. Sorafenib, a multikinase inhibitor, has recently shown survival benefits in patients at an advanced stage of disease. Hopefully, new molecular targeted therapies and their combination with sorafenib or with interventional and surgical procedures will expand the therapeutic armamentarium against HCC.
Hepatocellular carcinoma (HCC) currently ranks fifth in global cancer incidence, representing the third cause of cancer-related deaths worldwide. In 90% of cases HCC develops on the background of cirrhosis or chronic liver inflammation (hepatitis), and it is currently the leading cause of death among cirrhotic patients 2. Risk factors and etiologies vary among geographical regions: chronic hepatitis C viral infection (HCV) represents the predominant risk factor in western countries and Japan, while chronic hepatitis B viral infection (HBV) is the predominant one in Asia and Africa. Hepatitis B carriers are 100 times more likely to develop HCC than uninfected persons, with an annual incidence of 0.5% in non-cirrhotic cases and of 2-6% in cirrhotic patients 3,4. In patients with established HCV-related cirrhosis, the incidence of HCC is between 2% and 8% per year 5. New in the epidemiology of HCC, especially in the west, is the emerging role of the metabolic syndrome, related to obesity and insulin resistance, as an important risk factor, along with the well-established role of high consumption of alcohol 6. Differently from other neoplasms, the behaviour of HCC is rather peculiar, with prognosis determined not only by the tumoral disease but also by the severity of the underlying liver disease.
Until recently, HCC was a malignancy typically diagnosed at an advanced stage with a very poor prognosis. Nowadays, a wider range of therapeutic options can be offered, including surgical resection, orthotopic liver transplantation (OLT), percutaneous ablation procedures, intra-arterial treatments and, more recently, molecular targeted therapies 7. In addition, since it is well established that treatment is more effective when HCC is diagnosed at an early stage, efforts to improve the diagnostic process and surveillance policy have assumed a major role in the management of HCC 8. In this article we report a summary of the most recent information on novel advancements in the treatment of this neoplasm.
Surveillance and Diagnosis: The aim of HCC surveillance is to reduce mortality from the disease. A randomized controlled trial showed that HCC surveillance with liver ultrasound and serum alfa-fetoprotein (AFP) every 6 months improved survival in Chinese patients infected with HBV, irrespective of the presence of cirrhosis 9. Results from two European cohort studies confirm the potential benefits of this policy, especially in countries with a high prevalence of disease 10,11. Surveillance for HCC should be performed using ultrasonography. Ultrasound has been reported to have a sensitivity of between 65% and 80%, with a specificity greater than 90%, when used as a screening test; this performance, at present, is superior to that of any of the available serologic tests 10. With regard to AFP, a value of 20 ng/mL has shown a good balance between sensitivity and specificity. However, at this level the sensitivity is only 60%, and therefore AFP alone should not be used for screening 12. Proteomic research now represents a promising avenue, but at present valid tumor markers have still not reached the clinical setting. Surveillance with ultrasound every 6 months for detection of early HCC is recommended in cirrhotic patients and other specific risk groups 5 (Table 1).
Table 1 (excerpt). Genetic hemochromatosis. Patients with alpha1-antitrypsin deficiency, non-alcoholic steatohepatitis or autoimmune hepatitis have an increased risk of HCC; however, because of the paucity of data, no recommendations for or against surveillance can be made.
According to published guidelines, a diagnosis of HCC can be made in patients with cirrhosis who have a nodule greater than 2 cm identified on a dynamic imaging technique (CT scan, contrast ultrasound or MRI with contrast) with a typical vascular pattern (i.e., hyper-enhancement in the arterial phase with wash-out in the portal/venous phase) 5,13. In patients with a nodule greater than 2 cm but an atypical vascular pattern on imaging, a biopsy is recommended. Nodules between 1 and 2 cm presenting characteristic arterial enhancement with venous wash-out on 2 different imaging modalities should be considered as HCC. If the typical vascular pattern is detected on only a single imaging technique, a biopsy should be performed. Nodules found on ultrasound surveillance that are smaller than 1 cm should be followed with ultrasound every 3 to 4 months to detect growth suggestive of malignant evolution. If the hepatic lesion remains stable after 2 years, the patient can return to the standard surveillance of every 6 to 12 months. In clinical practice, the characterization of small hepatic nodules by imaging techniques remains difficult in a non-negligible percentage of cases. Results from a previous study of this group have shown that the non-invasive criteria for the diagnosis of HCC are satisfied in only 60% of small nodules. Moreover, it should also be considered that truly hypovascular HCC do exist, particularly among nodules of 1 to 2 cm, where they reach the figure of 17% 14.
Staging: An accurate staging of the tumor before considering treatment of HCC is mandatory to estimate prognosis and decide which therapy may offer the greatest survival potential. The so-called Barcelona Clinic Liver Cancer (BCLC) system is the most widely accepted staging system (Figure 1). It includes different variables (tumor stage, liver function, physical status, cancer-related symptoms) and links staging with treatment modalities and with an estimation of life expectancy based on published response rates to the various treatments 15. This system stratifies patients into 5 stages that identify the ideal candidates for the therapies currently available. Patients with a single lesion less than 2 cm, who are asymptomatic and do not show vascular invasion or satellites, represent the so-called "very early" HCC. This is a very well-differentiated HCC which correlates, from the pathological perspective, with the carcinoma "in situ" stage. The absence of microvascular invasion and dissemination offers the highest likelihood of cure. Results from Japanese studies have shown excellent survival outcomes in these patients: 5-year survival rates of 90% and 71% can be achieved with surgical resection or radiofrequency ablation (RFA), respectively, with a recurrence rate of 8% at 3 years 16,17. "Early-stage" disease is characterized by preserved liver function (Child-Pugh class A or B) and refers to HCC within the Milan criteria: a single HCC of 5 cm or less, or up to 3 lesions of less than 3 cm each, without vascular infiltration and without lymph node or distant metastases 18.
These patients can be effectively treated by therapies with curative intent, such as surgical resection, liver transplantation (OLT) or local ablation, with reported 5-year survival rates of 50-75% 18,19,20. The choice of therapy is not univocal, and it should be based on the severity of the liver dysfunction, the presence of portal hypertension and medical comorbidities. Patients with compensated cirrhosis and without HCC-related symptoms or vascular invasion who fall outside the criteria of the "very early" or "early" stage correspond to the "intermediate" stage HCC. Untreated patients at this stage present a median survival of 16 months. Transarterial chemoembolization (TACE) raises the median survival of these patients to 19-20 months and is now regarded as the standard of care at this stage 21. Patients who have symptomatic tumors and/or an invasive tumoral pattern (vascular invasion/extrahepatic spread) identify the "advanced" stage. Until two years ago, the lack of treatment options for patients at the advanced stage produced a median survival of 6 months. Sorafenib, a multi-tyrosine kinase inhibitor, has recently become the standard of care for these patients, as it is the only treatment proven to prolong survival in this setting 22. Patients with extensive tumor involvement leading to severe cancer-related symptoms and/or major impairment of liver function (Child-Pugh C) are considered "end stage". These patients have a life expectancy of less than 3-6 months. They are not likely to benefit from any of the treatments aforementioned and should be treated only with symptomatic therapy.
Treatment: Treatment of HCC is multidisciplinary and involves hepatologists, oncologists, surgeons and interventional radiologists. Several factors, such as the severity of the underlying liver disease, tumor burden, physical status and associated diseases, as well as the availability of and expertise in surgical and ablative therapies, should be considered before deciding the therapeutic strategy. Treatments for HCC have been conventionally divided into curative and palliative. Curative treatments, such as surgical resection, liver transplantation and percutaneous ablation, achieve complete responses in a high proportion of patients and are expected to improve survival. Palliative treatments are not aimed at cure but can obtain good response rates and also prolong survival in some cases. Despite the wide diffusion of surveillance strategies among cirrhotic patients, only 30-40% of patients in western countries can undergo curative treatments 23. Resection is the treatment of choice for HCC in non-cirrhotic patients, who represent, however, just 5% of cases in the west 24. In the absence of cirrhosis, surgical resection of this tumour is less likely to produce liver failure. Patients with HCC and concomitant cirrhosis are not all suitable for resection because of the risk of hepatic decompensation after surgical resection. Strict selection criteria are therefore required to avoid hepatic decompensation. Nowadays this procedure is recommended only in patients with a single HCC and well-preserved liver function (Child-Pugh A), with a normal bilirubin level and in the absence of portal hypertension (defined as the presence of oesophageal varices, a platelet count of less than 100,000/µl or splenomegaly). Due to progress in surgical treatment, survival rates at 5 years in these subjects exceed 70%, while treatment-related mortality is less than 3% 25.
Tumor recurrence complicates 50% of cases at 3 years and 70% at 5 years after resection and is due either to the growth of intrahepatic metastases understaged at pre-operative imaging studies or to the development of "de novo" tumors 20,26,27,28. Recurrences due to understaged HCC are usually observed within the first two years of follow-up, while metachronous tumours develop later. Non-anatomical resection, the presence of microscopic vascular invasion and histological differentiation are well-known predictors of recurrence and survival 29. Thus, differently from percutaneous ablation, surgical resection can provide relevant information about the risk of recurrence and allows a risk-based enlistment for OLT. Gene signature identification in the liver tissue adjacent to the tumor has recently been reported as a promising tool to identify the patients at highest risk of HCC recurrence, with the aim of adopting intensive clinical follow-up or chemopreventive strategies in such patients 30. The substantial recurrence rate has led to exploring the potential benefit of adjuvant therapy, including systemic or hepatic-artery chemotherapy or chemoembolization, after resection. At present, however, definitive data and firm conclusions in this setting are still lacking 31. It has been shown that partial hepatectomy, when patients meet the Milan criteria, allows a 5-year survival rate of 78.2%, as in the case of patients who underwent OLT, a procedure which is instead dependent on the availability of organs. Therefore, partial hepatectomy should be regarded as the appropriate strategy to treat well-selected patients 32. Liver transplantation, in theory, would be the optimal therapeutic option for HCC, as it simultaneously removes the tumor and the underlying cirrhosis, thus minimizing the risk of HCC recurrence. The best candidates for liver transplantation are those satisfying the Milan criteria. In these patients the survival rate after 5 years exceeds 70%, with a recurrence of less than 15% 18,19. The organ allocation policy is based on the Model for End-Stage Liver Disease (MELD) score, with the aim of transplanting patients with the highest short-term risk of mortality. At present, the great limit of this therapeutic option is the organ shortage, which increases waiting time and leads about 20% of patients to drop out before receiving the transplant 19,33. No definitive therapeutic strategies have been validated at present to delay tumor progression while listing. TACE, however, seems to reduce tumor burden and to delay progression, even if it can precipitate liver failure 34. Percutaneous ablation is cost-effective when the waiting time is longer than 6 months, but the risk of tumor seeding should not be disregarded 35. The major questions currently related to liver transplantation for HCC are whether the Milan criteria can be safely expanded, how to increase the scarce supply of donor organs, and how to assess the benefit of adjuvant therapies, such as percutaneous ablation, chemoembolization or systemic therapy, whilst patients are on the waiting list [36][37][38][39][40][41][42][43][44][45].
Non-surgical treatment of hepatocellular carcinoma
Patients who are not candidates for surgical treatments, due to either tumor characteristics or the underlying liver disease, can be considered for locoregional therapies, such as ablation, chemoembolization and radioembolization, or for new therapies.
Percutaneous ablation: Because of the several contraindications and the mortality and morbidity of surgery, alternative treatments have been developed.
Percutaneous ablation includes several minimally invasive techniques consisting of the injection of chemicals (absolute ethanol, acetic acid, boiling saline) into the HCC nodule or of its thermal destruction (radiofrequency, microwave, laser and cryoablation) 46. Generally it is performed under ultrasound guidance, but it can also be done under CT or MRI control or during laparotomy or laparoscopy. These techniques are inexpensive, require short hospitalization times and present rare complications 47. Major contraindications are the presence of ascites, severe hemostasis disorders (platelet count less than 50,000/mm³, prothrombin activity less than 50%), severe liver impairment and neoplastic vein thrombosis 48. Treatment efficacy, defined as the absence of contrast uptake on a dynamic imaging technique (CT scan, contrast ultrasound or MRI with contrast), can be evaluated one month after the procedure. Percutaneous ablation, usually by radiofrequency ablation (RFA) or percutaneous ethanol injection (PEI), is the best therapeutic option in patients with early, unresectable HCC 49,50. PEI, the first percutaneous technique introduced in clinical practice, is widely available and a well-tolerated procedure with few side effects, but it requires repeated injections on separate days. The rate of complete necrosis is closely correlated with tumor size: PEI has been reported to induce chemical coagulative necrosis in 70% to 80% of solitary HCC sized 3 cm or less and in almost 100% of tumors less than 2 cm; however, its efficacy drops to 50% in lesions between 3 and 5 cm, likely because of the presence of septa inside the lesion that limit the spread of ethanol 46,48. RFA achieves thermal coagulation using an alternating electric current. It produces a level of response similar to PEI in tumors less than 2 cm, but with fewer sessions, and may achieve better results in tumors greater than 2 cm 49. According to the guidelines of the American Association for the Study of Liver Diseases (AASLD), RFA is the first option to be considered in tumors greater than 2 cm 51,52. Moreover, a recent meta-analysis of randomized controlled trials comparing PEI with RFA has demonstrated that RFA offers a better 3-year survival than PEI (63% to 81% in RFA-treated patients versus 48% to 67% in PEI-treated patients) 52. The main limit of RFA is the rate of adverse events, which is higher than that with ethanol: mortality rates range from 0.1% to 0.3%, and severe complication rates reach up to 10% [53][54][55]. Major complications include peritoneal bleeding, pneumothorax, intestinal perforation, liver abscess, pleural effusion and bile duct stenosis 56. Specific contraindications for the use of RFA include poorly differentiated tumors or a subcapsular location, due to a higher seeding risk, and location of the tumor adjacent to the gastrointestinal tract, large bile ducts or large blood vessels 53. Recent studies have tried to compare PEI or RFA with surgical resection in patients with early HCC who are candidates for surgery. Although no definitive answer can be drawn, percutaneous ablation seems to achieve overall and disease-free survival similar to surgical resection, but with lower morbidity [57][58][59][60]. Similar to surgical resection, percutaneous ablation shows high rates of recurrence during follow-up, which may reach 70% at 5 years 48.
Transarterial Chemoembolization: Transarterial chemoembolization (TACE) is considered the first-line non-curative therapy for non-surgical patients with large/multifocal HCC who do not have vascular invasion or extrahepatic spread (BCLC intermediate stage). The rationale of this procedure is based on the knowledge of tumor vascularization: tumors > 2 cm mainly receive their blood supply from the end branches of the hepatic artery and not from the portal vein 61 . The complete procedure combines injection into the hepatic artery of a cytotoxic drug mixed with lipiodol (ethiodized oil), an oily contrast agent, followed by embolization, usually by means of absorbable gelatin (Gelfoam) particles. Cytotoxic drugs are preferentially delivered in the tumor and, when mixed with lipiodol, are progressively released inside the tumor. The most frequently used cytotoxic drugs are cisplatin and doxorubicin, which are amphiphilic and likely better retained in lipiodol, allowing an increased contact with tumoral tissue. Lipiodol remains embolized in the tumoral vessels and contributes, in addition to the embolizing agent, to ischemic necrosis of the tumor 48 . The procedure should be as selective as possible to target just the tumor and to avoid ischemic damage to the surrounding liver parenchyma. Controlled randomized studies and a cumulative meta-analysis have demonstrated that, in patients with unresectable HCC, TACE achieves objective responses lasting 1 to 6 months in 35% (range 16% to 61%) of cases and provides a significant benefit, when compared with best supportive care, in terms of both reduced progression rate and improved survival [62][63][64] . These results have been fundamental in recommending TACE as the standard of care for patients in the intermediate stage of the BCLC classification (Fig. 1). It must be outlined, however, that about 90% of patients included in the RCTs taken into account in Llovet's meta-analysis were in Child-Pugh class A, and that these trials were mostly performed in patients classified as "unresectable", including also early and advanced cases. Therefore, the conclusions about the widespread applicability of TACE in intermediate HCC, irrespective of Child-Pugh A or B class, should be critically revised. TACE is associated with adverse events in approximately 10% of treated patients; these events include ischemic cholecystitis, nausea, vomiting, bone marrow depression, and abdominal pain. Moreover, a postembolization syndrome is reported in about 50% of patients treated with TACE and includes fever, abdominal pain, and a moderate degree of intestinal obstruction. Treatment-related mortality is less than 5% 65 . Even if TACE is now the standard treatment for unresectable HCC, there is still no standardized protocol for this procedure. Randomized trials are currently ongoing to compare different types of cytotoxic drugs, drug vehicles, and therapeutic schemes. The potential benefit of TACE as neoadjuvant or adjuvant treatment in curative procedures, as well as the combined use of TACE with other procedures, are further issues under investigation. Palliative treatment also includes radiation therapy, both internal and external. The use of external radiation is, however, limited by the low radiation tolerance of the non-tumoral liver. Better results, in terms of safety and disease control rates, are obtained with radioembolization, a brachytherapy that consists of delivering radioactive implants.
Microspheres containing the radionuclide Yttrium-90 (Y-90), a pure β-emitter, are lodged via catheter insertion into the hepatic artery that feeds the tumors, and emit local radiation with limited exposure of adjacent healthy tissue 66 . Absolute contraindications for Y-90 radioembolization are: 1) a hepatopulmonary shunt that would result in a harmful dose of radiation (> 30 Gy with a single infusion or > 50 Gy for multiple infusions) to the lungs, 2) the inability to prevent embolization of microspheres into the gastrointestinal tract, and 3) a history of previous external irradiation to the liver. Differently from TACE, due to an absent or minimal embolic effect, Y-90 radioembolization can be safely performed in patients with portal vein occlusion 67 . Retrospective analyses and small noncontrolled prospective studies have shown that Y-90 radioembolization results in a high disease control rate, with a median survival for advanced HCC cases of 12 months 68-70 . At present, however, no randomized trials comparing Y-90 radioembolization with locoregional therapies, systemic therapies or best supportive care have been published. New Therapies: The management of patients with advanced HCC has been characterized for decades by limited therapeutic options, since both hormonal compounds and conventional cytotoxic chemotherapy have failed to show a substantial benefit for patients with HCC [71][72][73] . A better knowledge of molecular hepatocarcinogenesis and the subsequent introduction of targeted agents that specifically act on the neoplastic pathways have created a new therapeutic hope 74 . Very recently, positive results have been reported with the use of sorafenib, an oral multikinase inhibitor, which inhibits tumor-cell proliferation and tumor angiogenesis and increases the rate of apoptosis in a wide range of tumor models 75 . A multicentric international phase III randomized controlled trial (the SHARP trial) has recently shown a 3-month survival improvement with a manageable toxicity in patients with advanced HCC in the Child-Pugh A cirrhotic stage. The median overall survival was 10.7 months with sorafenib and 7.9 months with placebo (hazard ratio for the sorafenib group 0.69, 95% CI 0.55-0.87, p=0.005). Median time to progression was 5.5 months with sorafenib versus 2.8 months with placebo (p<0.001) 22 . The most common grade 3 drug-related adverse events observed in the SHARP study included diarrhea and hand-foot skin reaction, both of which occurred in 8% of all patients treated with sorafenib. A survival benefit of sorafenib has also been shown in a subsequent randomized phase III study conducted in Asia in patients with advanced HCC. In this study, the median overall survival increased from 4.1 months in the placebo group to 6.2 months in the sorafenib group (hazard ratio in the sorafenib group 0.67; 95% confidence interval, 0.49-0.93; P<0.0155) 76 . As a consequence of these results, the indication for sorafenib treatment is well established in Child-Pugh A patients having advanced HCC with or without extrahepatic spread and vascular involvement. Thus, any further drug being developed in this subgroup of HCC patients will have to be compared against the reference standard, sorafenib.
and other studies, as well as in current clinical practice, and both treatment efficacy and the frequency of adverse events seem to be similar to those of Child-Pugh A patients, except perhaps for bilirubin increase; 3) Adjuvant setting after surgical or loco-regional treatment, and in combination with conventional treatments: these perspectives are currently under investigation in clinical trials, as are combinations with other molecular-targeted agents and second-line treatments in patients progressing under sorafenib. Drugs that specifically target key molecules in carcinogenesis have emerged over the last decade. Molecular targeting has become a promising approach for the effective treatment of various cancers, now including hepatocellular carcinoma. Therefore, several studies evaluating the efficacy of other molecular therapies in HCC are currently at different stages of validation in phase I, II and III clinical trials. These new therapies include molecules or monoclonal antibodies blocking different molecular targets which have been shown to play a crucial role in HCC proliferation, such as vascular endothelial growth factor (VEGF), platelet-derived growth factor (PDGF), epidermal growth factor (EGF), the EGF receptor (EGFR), and the PI3K/Akt/mTOR signaling pathway 74 . The demonstration of an altered expression of the target molecules should be mandatory in defining the inclusion criteria of future studies. This could help to define the beneficial effect in restricted but identifiable subgroups of patients and to correctly allocate patients to the specific treatment. It is desirable that the positive results obtained by sorafenib represent a first step toward new, tumor biology-based therapeutic chances. In this regard, it is likewise decisive to identify predictive biomarkers to select patients more likely to benefit from any specific agent. Most probably, major benefits will be obtained by combinations of agents acting simultaneously on distinct molecular targets, which bear great hopes for the treatment of HCC in the coming years.
6,176.8
2009-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Preparation and Characterization of Li-Ion Graphite Anodes Using Synchrotron Tomography We present an approach for multi-layer preparation to perform microstructure analysis of a Li-ion cell anode active material using synchrotron tomography. All necessary steps, from the disassembly of differently-housed cells (pouch and cylindrical), via selection of interesting layer regions, to the separation of the graphite compound and current collector, are described in detail. The proposed stacking method improves the efficiency of synchrotron tomography by measuring up to ten layers in parallel, without loss of image resolution or quality, resulting in a maximization of acquired data. Additionally, we perform an analysis of the obtained 3D volumes by calculating microstructural characteristics, like porosity, tortuosity and specific surface area. Due to the large number of measurable layers within one stacked sample, differences between aged and pristine material (e.g., significant differences in tortuosity and specific surface area, while porosity remains constant), as well as the homogeneity of the material within one cell, could be recognized. Introduction In recent times, electrochemical energy storage has become more important, especially for usage in e-mobility applications, such as pure electric, plug-in-hybrid or mild-hybrid vehicles. The requirements regarding long life (usage time ≥ 10 years) and high energy density are dominantly fulfilled by Li-ion cells. Furthermore, due to their complicated aging behavior [1,2], they are the focus of many researchers aiming to gain a better understanding of the aging process. During the lifetime of Li-ion cells, numerous aging mechanisms [1,2] occur, which affect different components, like the anode or cathode active material, separator, current collector or electrolyte. These mechanisms interact in a very complex way. Notably, graphite, which is mainly used as an anode material, is involved in many aging processes [1]. The literature dominantly shows well-known, but only partly understood, mechanisms, like the growth of the SEI (solid electrolyte interface) at the boundary between graphite particles and the electrolyte [3][4][5], lithium deposition caused by high-current or low-temperature charging [6][7][8], micro-cracking of graphite particles caused by heavy electrical usage [9] and structural changes of the anode active material due to multiple reasons. To get a closer insight into the microstructure of the anode active material, we use synchrotron tomography [10][11][12][13][14][15][16][17][18][19]. Besides the qualitative impressions that one can get from the visual inspection of tomographic 3D images, we calculate the structural characteristics of the samples to obtain a quantitative statement of the state of the material samples. In particular, we look at the spherical contact distribution function for the graphite material, which is closely related to the diffusive behavior of the graphite phase. Furthermore, we calculate the tortuosity, which is an important characteristic related to the transport of ions in porous media [20]. A change in the tortuosity for degraded material can be explained by cracks and fractures in the structure. The paper is organized as follows. In Section 2, we give an overview of the necessary steps for the disassembly of different types of Li-ion cells. Furthermore, a promising approach for the surface modification of graphite to qualify the presence of lithium deposition is introduced in Section 2.4.
Accordingly, we discuss the preparation of samples into stacks to improve the efficiency of synchrotron tomography in Section 3. Up to ten samples can be stacked together in order to be measured in parallel. The experimental setup, i.e., the samples that are extracted from pristine and aged cells, as well as the necessary post-processing methods, like reconstruction and binarization, are presented in Section 4. Finally, in Section 5, we discuss the results of several structural analyses, like the calculation of tortuosity and spherical contact distribution functions. These analysis methods allow a quantitative description of the samples and show the potential of synchrotron tomography in combination with refined preparation techniques as a valuable tool for the investigation and characterization of functional materials. Extraction of Samples In this section, we describe the procedure to obtain anode samples from different types of Li-ion cells for structural analysis. For the purposes of comparability, all analyzed cells were discharged to 0% SOC (state-of-charge) using the CC-CV (constant current constant voltage) discharge procedure with the cut-off voltage given in the datasheet delivered by the manufacturer. The method for the extraction of samples from the anode material will be described in detail. Cell Disassembly To minimize degradation caused by the presence of oxygen and humidity, we disassembled all cells in a glovebox (mBraun MB-200B). Automotive Li-ion pouch cells were opened with a ceramic scalpel to avoid unwanted shorts and further structural changes. After removal of the upper pouch foil, the electrodes can be separated and analyzed optically. The aluminum housing of cylindrical cells was sliced next to the positive terminal using a self-constructed tool. Then, the positive terminal and its connection to the cathode active material were separately disassembled. Finally, the aluminum case has to be rolled down using small pliers. To ensure non-destructive disassembly, we monitored the temperature of the cell as the best indicator for shorts. The setup used consists of a PT100 thermal resistor connected to a PicoTechnology ® PT-104 data logger, visualized by a common notebook. If the temperature exceeded 35 °C, we did not use the cell for further analysis. Sample Selection As shown in Figure 1(a), from one pouch cell we extracted multiple samples with a size of 10 mm × 10 mm using a ceramic scalpel. In the case of cylindrical cells, several equally-sized (10 mm × 10 mm) samples were sliced from equidistant intervals d of the jellyroll; see Figure 1(b). Subsequently, all samples were washed with dimethyl carbonate (DMC). Separation of Graphite Layers The structure of an anode layer used in Li-ion cells usually consists of a copper foil coated with a mixture of graphite and a binder on both sides; see Figure 2(a). Metals like copper have a high density, and therefore the X-ray beams used in synchrotron tomography are not able to pass through them. To obtain better image quality from the anode sample, it is mandatory to separate the copper foil and graphite layers. Three different methods were compared; an overview is shown in Table 1. Surface Modification To achieve reproducible results, defined sample sizes and flat shapes are essential. Therefore, chemical treatment using nitric acid (HNO 3 ; 65%) yielded the best graphite monolayer [10,21].
Depending on the thickness of the monolayers, the type of degradation (e.g., lithium deposition) and the binder used by the manufacturer, we obtained the best samples using 5 mL of demineralized water and three to ten drops of HNO 3 , resulting in a dilute nitric acid (2%-6%) solution. After 5-30 s, the copper foil dissolved. Both graphite layers were washed twice with demineralized water and once using propan-2-ol (C 3 H 7 OH), while constantly paying attention to the orientation of the layers (see the pink markers in Figure 2(b)). Finally, the separated layers were stored on a small sheet of paper for at least 10 min in order to dry. Metallic lithium, which is formed as a result of electrochemical plating during the cycling of an electrode, is not visible in neutron-diffraction experiments. However, there are some ways to enhance its visibility by adding complexes and/or different metal ions onto the metallic lithium parts. With a surface modification using, e.g., glucosamines, the metallic lithium deposition can be made visible in neutron experiments. The deposition of metallic lithium is primarily a diffusion-triggered process. To verify the proposed surface modification procedure using glucosamines, the cells were cycled 20 times (1 C charge and 1 C discharge current in a potential range of 3 V-4.2 V) at an ambient temperature of −10 °C, to ensure the presence of metallic lithium on the anode surface. The additives were used with a slight excess to ensure a homogeneous coating of the lithium-plated parts of the electrode. A homogeneous coating proved to be essential for the detailed investigation of the surface. N-(Methylnitrosocarbamoyl)-α-D-glucosamine (STZ; Sigma-Aldrich; see Figure 3(a)) was used for the selective modification of the surfaces ex situ. A solution of STZ in DMC (1 M) was prepared in an argon-filled glovebox. About 2 wt% of a solution of predispersed surfactants (Triton X-109, Triton X-209; 1:1 by volume) in EC:DMC (1:1 by volume, 10 wt%) was added with stirring. This solution could be directly added into the electrolyte between the electrodes. For a homogeneous mixing of the additive with the electrolyte and to ensure a homogeneous wetting of the electrodes, it is important to allow a standing time of about 30 min after the injection of the STZ solution is completed. A subsequent heating step (38 °C, 15 min) initiated the surface modification. This process is shown schematically in Figure 4. Note that no electrochemical cycling was performed after the additive was added; the reason is the low electrochemical stability of the glucosamine, while its stability at open-circuit potential is high enough for a safe preparation of the samples. FTIR microscopy was applied for the investigation of the influence of the surface modification at lateral resolution, using an HJY LabRAM HR with an FTIR module. While the upper spectrum of Figure 3(b) displays typical bands of the as-prepared electrode, including parts with metallic lithium, the lower spectrum in Figure 3(b) exhibits carbonyl peaks (C=O) at 1628 cm −1 and hydroxyl peaks (O-H) at about 3304 cm −1 . Significant differences could be observed between parts of the electrode where lithium plating occurred and pristine parts. With a subsequent mapping technique, larger areas of electrodes (about 1.5 cm × 1.2 cm) were investigated to validate the surface-modifying effect of STZ.
The LC-MS analysis of the electrolyte showed that the amount of consumed STZ correlated very well with the amount of metallic lithium that was deposited onto the surface of the electrodes. Multilayer Preparation Synchrotron tomography is a useful tool to obtain the microstructural characteristics of the Li-ion anode material. To maximize efficiency, we prepared the anode samples as a multilayer stack. This gives us the opportunity to compare different kinds of aged cells and various anode materials from different manufacturers and to verify the homogeneity of the production processes. Therefore, our approach is to stack the anode layers to measure several samples in parallel. This means that we obtain one image for all samples inside one stack. Hence, the anode samples inside one stack have to be divided sharply by a separator layer in between. The additional layers have to feature a non-particle-based microstructure for good visibility and contrast against graphite. In this work, we investigate the influence of different separation materials and stacking properties. Besides the microstructural properties discussed above, we identified the following characteristics, which are important for a promising stacking preparation: (1) thickness; (2) stability of the stack; (3) stickiness; and (4) sliceability. The properties of the investigated materials are summarized in Table 2. Layer Stacking To realize a good resolution, the optimal sample dimensions for synchrotron tomography should be a cylinder (∅ 1 mm, h: 1 mm). Thus, the thickness of the complete stack can be calculated using the following formula: d_Stack = n · d_anode-layer + (n + 1) · d_separation, where n is the number of anode samples per stack, d_anode-layer the thickness of an anode layer and d_separation the thickness of the separation material. Generally, stacking was performed by alternating separation layers (25 mm × 25 mm) and anode samples (10 mm × 10 mm). At the bottom and the top of the stack, a separation layer is essential to ensure stability. A maximum overlap between all anode samples inside one stack should be ensured. Furthermore, it is important to ensure the correct orientation (see Figure 2(b), pink marker) of each layer. Note that for stack preparation using Li-ion separator materials and cellulose papers, an additional rapid glue (LocTite ® 4850) based on cyanoacrylates was used. Each stack was marked on the top and stored for 24 h. We assume that there is no effect on electrode morphology from using cyanoacrylate-based glue. This was confirmed by a comparison of adhesive-tape- and glue-based preparation methods; no significant differences could be noticed. Stack Slicing and Final Setup As described above, the final geometry of the anode stack should be cylindrical. To approximate a cylinder of 1 mm diameter, we applied a square cross-section; by the Pythagorean theorem, the edge length of a square inscribed in a circle of 1 mm diameter is 1/√2 ≈ 0.7 mm. The sequence of the slices is shown in Figure 5. Afterwards, we were able to monitor the size of the stack by using an optical microscope (Leica) or SEM imaging; see Figure 6. Finally, the prepared anode stack was fixed on a specific sample holder with a small amount of hot or rapid glue to perform the synchrotron measurement. Figure 5(b) shows the final probe, which was applied to the tomography setup. Discussion The experimental results showed significant differences among the investigated separation material groups; see Table 2.
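To make the stack-sizing arithmetic above concrete, the following short Python sketch computes the total stack thickness, the largest number of anode layers that fits into the roughly 1 mm sample height, and the inscribed-square edge length. It is a minimal illustration; the layer thicknesses are assumed example values, not measurements from the paper.

```python
import math

def stack_thickness(n, d_anode, d_sep):
    """Total stack thickness: n anode layers alternated with n + 1 separation layers."""
    return n * d_anode + (n + 1) * d_sep

def max_layers(h_max, d_anode, d_sep):
    """Largest n such that stack_thickness(n, ...) <= h_max."""
    return int((h_max - d_sep) // (d_anode + d_sep))

# Assumed example thicknesses in mm; real values depend on the cell and separator material.
d_anode, d_sep, h_max = 0.060, 0.035, 1.0
n = max_layers(h_max, d_anode, d_sep)
print(f"{n} layers -> stack thickness {stack_thickness(n, d_anode, d_sep):.3f} mm")

# Edge of the square cross-section inscribed in a cylinder of 1 mm diameter.
print(f"edge length = {1.0 / math.sqrt(2):.3f} mm")  # ~0.707 mm, rounded to 0.7 mm in the text
```

With these assumed thicknesses, ten layers fit into a 1 mm stack, consistent with the "up to ten samples" figure quoted above.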
Double-sided adhesive tape showed a thickness of 35 µm, and the maximum number of anode layers was obtained. The stability was very high, but due to a missing carrier, a stack made from this material could not be sliced. Single-sided adhesive tape exhibits the opposite behavior. Stacks consisting of microporous polymer membranes led to the highest number of anode layers. However, an additional primer (LocTite ® 770) was required because of the poor adhesive properties of polypropylene (PP) and polyethylene (PE). Despite the application of the primer, the stability of the anode layer stack was not sufficient. The three investigated cellulose papers showed very good stacking and slicing characteristics, differing only in thickness and, therefore, in the maximum number of anode layers per stack. As the best compromise between the maximum number of layers and stability, we selected greaseproof paper as the separation material for all further stack preparations. Synchrotron Tomography The synchrotron X-ray tomography measurements were performed at the imaging station of the BAMline [23,24]. The facility is located at the electron storage ring BESSY II at the Helmholtz Centre, Berlin. A monochromatic synchrotron beam at an energy of 19 keV was obtained by a W-Si multilayer monochromator with an energy resolution of about ∆E/E = 10 −2 . The X-ray energy was adapted to the thickness and absorption properties of the investigated samples. It was found that 19 keV is a good compromise between transmission intensity and contrast. A CWO scintillator with a thickness of 50 µm was used to convert the X-rays into visible light. A PCO camera with a 4008 × 2672 pixel CCD chip was employed to capture the images. An optical setup ("Optique-Peter") was used to transfer the light onto the CCD chip of the camera system [25]; see Figure 7. The pixel size used was 0.44 µm and the achieved spatial resolution about 1 µm. The field of view was about 1.7 × 1.2 mm 2 . A set of 2200 radiographic images was taken from the samples over an angular range of 180°. Additionally, 230 flat field images (i.e., without a sample) were taken. After subtraction of the dark field signal, the radiographic projections were divided by the flat field images in order to obtain bright field-corrected (normalized) images (see Figure 8(a)). The exposure time for each radiographic projection was 3 s. The time for a complete tomographic measurement was about three hours. A proper normalization provides the transmission of X-rays through the sample according to the Beer-Lambert law: I/I 0 = exp(−µd). Here, I 0 and I denote the intensity of the beam in front of and behind the sample, d the transmitted distance through a certain material and µ the linear attenuation coefficient of that material at the used X-ray energy. Data Post-Processing The information on the transmission was used for the three-dimensional reconstruction of the attenuation coefficients of each voxel in the sample volume. This was done with a standard algorithm, filtered back-projection [26]. Here, the images were projected back into the volume according to the projection angle, and this was applied for all angular steps. Without filtering, the reconstructed object would be blurry. To avoid this, a high-pass filter (Hamming filter) was applied to each projection in the horizontal frequency domain before back-projection. A vertical slice through the reconstructed volume is shown in Figure 8(b).
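As a concrete illustration of the normalization step, the following NumPy sketch applies the dark-field subtraction and flat-field division described above and converts the transmission to line integrals of the attenuation coefficient via the Beer-Lambert law. It is a minimal example with synthetic arrays standing in for detector frames; the array names and sizes are assumptions, not the beamline's actual processing code.

```python
import numpy as np

def normalize_projection(raw, dark, flat, eps=1e-6):
    """Bright field correction: transmission T = (raw - dark) / (flat - dark)."""
    transmission = (raw - dark) / np.clip(flat - dark, eps, None)
    return np.clip(transmission, eps, 1.0)

def attenuation_line_integral(transmission):
    """Beer-Lambert law: T = exp(-mu * d)  =>  mu * d = -ln(T)."""
    return -np.log(transmission)

# Small synthetic frames; the real detector delivers 4008 x 2672 pixel images.
rng = np.random.default_rng(0)
dark = rng.normal(100, 1, (256, 256))
flat = rng.normal(4000, 10, (256, 256))
raw = 0.5 * (flat - dark) + dark          # a sample transmitting 50% of the beam
mu_d = attenuation_line_integral(normalize_projection(raw, dark, flat))
print(mu_d.mean())                         # ~0.69 = -ln(0.5)
```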
Since the contrast in the 3D synchrotron images is very high, we binarized them by global thresholding [27,28]. The 8-bit grayscale threshold was set to 32 for the pristine and 72 for the degraded electrodes in order to obtain reasonable porosities in the range [0.22, 0.26] for the samples. Figure 9 shows the effect of binarization. Figure 9. 2D slices from the reconstructed grayscale (first row) and the binary (second row) images of P 1 C (left), P 2 E (center) and D 1 C (right). Structural Analysis In this section, we compute several structural characteristics for images of Li-ion cells obtained by the preparation and visualization methods discussed in Sections 2-4. This enables us to perform a quantitative comparison and a discussion of different electrode samples. Note that the considered characteristics are known to be linked to the functionality of graphite electrodes. The analysis addresses two main questions that play an important role in the investigation of Li-ion cells: 1. Can the microstructure of graphite be regarded as statistically homogeneous over the whole cell? 2. Can the influence of aging on the microstructure of graphite be characterized? To answer these questions, we take three scenarios into account, with two synchrotron images considered for each scenario. In particular, the scenarios are: (i) pristine material from the center of the cell; (ii) pristine material from the edge of the cell; (iii) degraded material from the center of the cell. In the following, we denote the binary images considered for Scenarios (i)-(iii) by P 1 C , P 2 C , P 1 E , P 2 E , D 1 C and D 2 C , where P (D) stands for pristine (degraded) electrodes and C (E) for cutouts at center (edge) regions; cf. Section 2 and Figure 1(a). Recall that these images are obtained as described in Sections 2-4. For a sample of each of the three groups, see Figure 9. Note that the considered cutouts have approximately the same dimensions of 660 × 550 × 50 µm 3 . The samples analyzed in this section were extracted from a large automotive EV pouch cell (50 Ah nominal capacity, NMC cathode, potential range 3 V-4.2 V). The degraded cell was heavily cycled for about nine months with a time-scaled realistic load profile (see Figure 10) similar to usage in a purely electric vehicle, at an ambient temperature of 25 °C. The final cell capacity was 70% of the initial capacity, measured with a 1 C discharge current (1 C = Capacity_nom / 1 h). Figure 10. Current profile applied to the cyclically-degraded cell. The detailed structural analysis explained below was possible due to the preparation and visualization techniques proposed in this paper. Comparison of Structural Characteristics The goal of this section is to obtain a quantitative comparison of the binary images P 1 C , P 2 C , P 1 E , P 2 E , D 1 C and D 2 C by computing several structural characteristics for each of these images. As a first structural characteristic, we consider the porosity, which is the fraction of the volume of voids (i.e., the volume of pore space) over the total volume [29]. The second characteristic is the specific surface area, which specifies the total surface area of a material per unit volume [29]. The results obtained for the porosity and the specific surface area are listed in Table 3. It turns out that the porosities of all considered samples are nearly identical.
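A minimal sketch of the thresholding and porosity computation described above, assuming a reconstructed 8-bit grayscale volume is available as a NumPy array (the variable names and the toy volume are illustrative, not the authors' code):

```python
import numpy as np

def binarize(volume_8bit, threshold):
    """Global thresholding: voxels above the threshold are graphite (True), the rest pore (False)."""
    return volume_8bit > threshold

def porosity(binary_volume):
    """Fraction of pore voxels over the total number of voxels."""
    return 1.0 - binary_volume.mean()

# Toy 8-bit volume standing in for a reconstructed tomogram cutout.
rng = np.random.default_rng(1)
volume = rng.integers(0, 256, size=(50, 550, 660), dtype=np.uint8)

graphite = binarize(volume, threshold=32)   # 32 for pristine, 72 for degraded electrodes
print(f"porosity = {porosity(graphite):.3f}")
```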
The same holds for the specific surface areas of the pristine electrodes (P 1 C , P 2 C , P 1 E , P 2 E ), whereas the specific surface areas of the degraded electrodes are significantly higher than those of their pristine counterparts. The microstructural characteristics of both degraded samples (D 1 C , D 2 C ) exhibit an almost perfect match. Table 3. Porosity and specific surface area computed for six selected binary images of anode layers. For a more detailed characterization of the graphite and pore space, respectively, we consider the probability density function of so-called spherical contact distances from the pore to the graphite phase and vice versa [29]. This characteristic can be interpreted as a kind of pore (particle) size distribution. The spherical contact distance of a point located in the pore phase or the graphite phase, respectively, is given by the minimum distance to the complementary phase. Note that the considered density functions uniquely determine the probability that the spherical contact distance of a randomly chosen point located in the pore phase or the graphite phase, respectively, is within a certain interval. In summary, the spherical contact distance distribution provides a good measure for the 'typical' distances from the pore to the graphite phase and vice versa; cf. [29]. The computed probability density functions for the spherical contact distances from the graphite to the pore phase and vice versa are visualized in Figures 12(a) and 12(b), respectively. The corresponding mean values and variances are listed in Table 4. It turns out that the density functions for the spherical contact distances computed for (P 1 C , P 2 C , P 1 E , P 2 E ) nearly coincide, whereas a large discrepancy is observed between the results for pristine and degraded electrodes. In particular, for the spherical contact distances both from the graphite to the pore phase and vice versa, the distances tend to be smaller for the degraded electrodes compared to the pristine electrodes. This behavior can be explained by cracks and fractures in the microstructures of degraded electrodes. These deformations lead to a much more finely dispersed graphite phase within the degraded electrodes, whereas the graphite phase in the pristine electrodes is much more aggregated. This assumption is also supported by the visual impression of Figure 9. Table 4. Mean and variance of the spherical contact distances for the pore (scdf P ) and graphite phase (scdf graphite ) in µm, as well as of the geometric tortuosity (tort), computed for six selected binary images of anode layers. Finally, we focus on the so-called geometric tortuosity; see e.g., [30][31][32]. It evaluates the tortuosity of the pathways through the pore phase in the through-plane direction. In particular, starting from a randomly chosen location on top of the porous material, the geometric tortuosity is defined as the (random) Euclidean length of the shortest path through the material within the pore space, divided by the material thickness (in the z-direction). For this purpose, the set of pore space paths is represented by a geometric 3D graph. This graph is computed using the skeletonization algorithm implemented in the software Avizo 7; see Figure 11. Figure 11. Extraction of the pore space graph via skeletonization from a 3D binary image: binary image (left); solid phase and pore space graph (center); pore space graph (right).
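The spherical contact distances described above can be estimated on a voxel grid with a Euclidean distance transform. The following SciPy sketch (an illustrative approach, not the software used in the paper) computes, for every pore voxel, the distance to the nearest graphite voxel and then summarizes the resulting empirical distribution:

```python
import numpy as np
from scipy import ndimage

def spherical_contact_distances(graphite, voxel_size_um=0.44):
    """Distances (in micrometers) from each pore voxel to the nearest graphite voxel.

    graphite: boolean 3D array, True = graphite phase, False = pore phase.
    The distance transform is evaluated on the pore phase; swapping the roles
    of the two phases gives the distances from graphite to pore instead.
    """
    dist = ndimage.distance_transform_edt(~graphite) * voxel_size_um
    return dist[~graphite]

# Toy binary volume standing in for a segmented tomogram.
rng = np.random.default_rng(2)
graphite = ndimage.binary_dilation(rng.random((50, 100, 100)) > 0.97, iterations=3)

d = spherical_contact_distances(graphite)
print(f"mean = {d.mean():.2f} um, variance = {d.var():.2f} um^2")
```

A histogram of these distances approximates the probability density functions shown in Figure 12; a finer graphite dispersion, as in the degraded electrodes, shifts the distribution toward smaller distances.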
The computed probability density functions for the geometric tortuosity are visualized in Figure 12(c), and the corresponding mean values and variances are listed in Table 4. As a result, we again obtain that the differences in geometric tortuosity within both groups (i.e., pristine P 1 C , P 2 C , P 1 E , P 2 E and degraded D 1 C , D 2 C electrodes) are negligible. Moreover, there exist significant deviations between the two groups, where the degraded electrodes exhibit significantly smaller lengths of the shortest pathways through their pore space. This can again be explained by the much more finely dispersed graphite phase within degraded electrodes. Summary In this section, we summarize the discussion of the results obtained by the structural analysis. It turns out that for all considered characteristics, the differences within the pristine and degraded groups are negligible, whereas a significant discrepancy between the two groups can be observed. In particular, we can conclude that it does not matter from which region of the tomograms the cutouts are extracted. This also indicates that the considered materials are statistically homogeneous. Moreover, because of the structural differences between pristine and degraded electrodes, we can conclude that synchrotron tomography is an adequate method for detecting such changes. Thus, the proposed preparation and visualization techniques described in Sections 3 and 4 provide an excellent tool for a cost- and time-saving analysis of degradation processes in the microstructure of Li-ion cells. Conclusions We successfully introduced a new preparation method for the analysis of Li-ion graphite material using synchrotron tomography. The complete procedure, including cell disassembly, sample selection and extraction, as well as the proposed efficient multilayer stacking, was described in detail. Due to the discussed complex aging behavior of Li-ion cells, many investigations have to be done to gain a more detailed understanding. In particular, the anode material is a key factor for cell performance and the limitation of the lifetime. Since the microstructure of the active material is essential for the aging characteristics, synchrotron tomography is an excellent method, because the resolution is high enough to detect the shape of particles and the differences between particles and pores in all three dimensions. The presented preparation method extends the advantages of synchrotron tomography by massively parallel measurement of samples. This results in the possibility of comparing different regions of a given cell, enhancing statistical data by analyzing many samples from a similar area of a cell, comparing anode material from different manufacturers or cells, and lowering costs, because fewer measurements are necessary. As shown in Section 5, the structural analysis pointed out that aging causes significant changes in the microstructure of graphite material. Furthermore, we found that the investigated samples from the same cell do not differ significantly from a statistical point of view. Hence, the method provides the possibility to analyze the homogeneity of the used active material. On the other hand, the difference between pristine and aged cells with respect to the calculated characteristics, e.g., the tortuosity and the sizes of pores and particles, is significant, which motivates more detailed analyses and investigations, like (functional) particle-based modeling, as future work.
Furthermore, the structural characteristics of lithium deposition, which can be made visible in synchrotron tomography using the method proposed in Section 2.4, will be investigated in a forthcoming paper. Author Contributions Tim Mitsch and Yvonne Krämer disassembled the battery cells and executed the stack preparation (Sections 2 and 3). Andreas Hintennach developed the surface modification described in Section 2.4. Henning Markötter and Ingo Manke carried out the synchrotron tomography experiments and the reconstruction of the 3D volume data (Section 4). Gerd Gaiselmann, Julian Feinauer and Volker Schmidt performed the structural analysis and extracted characteristics from the generated data (Section 5).
6,260.8
2014-06-01T00:00:00.000
[ "Materials Science" ]
Rediscovery and reclassification of the dipteran taxon Nothomicrodon Wheeler, an exclusive endoparasitoid of gyne ant larvae The myrmecophile larva of the dipteran taxon Nothomicrodon Wheeler is rediscovered, almost a century after its original description and unique report. The systematic position of this dipteran has remained enigmatic due to the absence of reared imagos to confirm identity. We also failed to rear imagos, but we scrutinized entire nests of the Brazilian arboreal dolichoderine ant Azteca chartifex which, combined with morphological and molecular studies, enabled us to establish beyond doubt that Nothomicrodon belongs to the Phoridae (Insecta: Diptera), not the Syrphidae where it was first placed, and that the species we studied is an endoparasitoid of the larvae of A. chartifex, exclusively attacking sexual female (gyne) larvae. Nothomicrodon parasitism can exert high fitness costs on a host colony. Our discovery adds one more case to the growing number of phorid taxa known to parasitize ant larvae and suggests that many others remain to be discovered. Our findings and literature review confirm that the Phoridae is the only taxon known to parasitize both the adults and the immature stages of different castes of ants, thus threatening ants on all fronts. Almost a century ago, the enigmatic taxon Nothomicrodon aztecarum (Wheeler, 1924) was erected on the basis of morphologically unusual larvae found among the brood of a carton-nest ant, Azteca trigona (Dolichoderinae), from Barro Colorado Island, Panama 23 . No adult N. aztecarum were obtained and, due to the similarity of the larval stages, Wheeler placed Nothomicrodon in the Syrphidae as an ally of the genus Microdon, whose larvae are well-known predators of ant brood 24 . The true affinity of Nothomicrodon has remained unresolved because adults have not been reared and larvae have not been re-encountered since its original description. Historically, the genus has been treated as incertae sedis by syrphid experts. Cheng & Thompson 25 stated that the larva has none of the characteristics of microdontine larvae (flattened creeping sole, convex dorsal surface) and, based on a suggestion from G.E. Rotheray, speculated that it might belong to the Phoridae. The most up-to-date revision of Microdontinae also treated Nothomicrodon as an unplaced taxon 26 . In this paper we report on the rediscovery of Nothomicrodon. Scrutiny of entire nests of Azteca chartifex collected in Brazil, combined with morphological and molecular studies, enabled us to establish beyond doubt that these larvae belong to the Phoridae and that they are endoparasitoids of A. chartifex larvae, more specifically of the sexual female (gyne) larvae. Based on these data and a literature review of phorid parasitoids attacking social insect brood, we confirm that the Phoridae is the only insect family known with species that parasitize both the adults and the immature stages of their ant hosts, thereby threatening ants on all fronts. Results DNA sequencing and identification. The obtained COIa fragment comprised 653 bp and the COIb fragment 780 bp. The top 20 closest matches of the COIa sequence identification on BOLD were all Phoridae samples (except one Agromyzidae (Diptera: Opomyzoidea) sample). The highest similarity, 88.7%, was found with an unpublished Phoridae sample. Similarities of 88.5% were found with published barcodes of two phorid flies from the USA and Canada (BINs BOLD: AAM9347 and BOLD: AAN8679), both from the genus Megaselia.
Blasting the COIb fragment in GenBank (www.ncbi.nih.gov, on 7 March, 2016) returned a long list of Phoridae samples as closest matches. Sequence similarity of 84% was reported for samples of the phorids Anevrina variabilis (GenBank accession number GU559934) and Dohrniphora cornuta (HM352592). Sequence similarity of 83% was reported for two samples of the phorid Apocephalus paraponerae (AF217478-9), which is a known ant parasitoid 27 . No syrphid fly species appeared in the top 20 closest matches for either the COIa or the COIb sequence identification. Moreover, the neighbor-joining and maximum likelihood analyses placed the Nothomicrodon sequence among all included samples of other Phoridae (Fig. 1, Table S1), thereby confirming the identification of the sample as a phorid fly, not a syrphid. Description of Nothomicrodon third instar larva (n = 2). Overall appearance. Pear-shaped with a broad, oval abdomen and a narrow, tapering thorax; pale to brown with a coriaceous integument (Fig. 2A); abdomen smooth except for a single pair of deep infolds across the dorsum (see Fig. S1A); head skeleton with the apex of the labium external to the fleshy pseudocephalon and comprising a pair of downwardly projecting, black, heavily sclerotized hooks, rest of the head skeleton translucent (see Fig. S2A), poorly sclerotized and lacking cibarial ridges. Description. Length about 4.5 mm (1.5 mm pseudocephalon and thorax + 3 mm abdomen), abdomen 3.5 mm wide and maximum height about 0.75 mm; width of the metathorax where it joins the broader abdomen about a quarter the maximum width of the abdomen, and at the prothoracic apex about a fifth the width of the base of the metathorax (see Fig. S1B,C); antennae on the dorso-lateral margins of the pseudocephalon, just posterior to the apex, appearing as a pair of cylindrical, tapering structures (see Fig. S2B), maxillary palpi not recognizable in the specimens examined; ventrally, apex of pseudocephalon with a pair of inwardly directed, flange-like, cuticular projections (see Fig. S1D); pseudocephalon and thorax retractile, as revealed by folds and creases along which the integument probably collapses and/or retracts, by analogy with other larvae 28 ; probable margin between the pseudocephalon and prothorax indicated by a deep infold; prothorax elongate, about twice as long as each of the pseudocephalon, mesothorax and metathorax, which are all of a similar length (see Fig. S1C); towards the rear of the prothorax, on the dorso-lateral margins, are the anterior respiratory processes, comprising a pair of cylindrical projections inclined forward and with the spiracles across the apex (see Fig. S1C); metathorax attached to a firm infold at the apex of the first abdominal segment by a band of thin, flexible integument; lateral and posterior margins of the abdomen pinched and with a slight, continuous beading; externally, segments marked only by a segmental pattern of inconspicuous sensilla, each accompanied by a single hair-like seta, abdomen otherwise unmarked except for the third abdominal segment, whose boundaries with adjacent segments are marked by deep infolds across the dorsum (see Fig. S1A); anal segment crescent-shaped, as revealed by the pattern of sensilla around the slight, dome-shaped posterior respiratory process; this process with four pairs of short, parallel spiracles orientated dorso-ventrally, above which are a pair of cuticular scars, the paired spiracular plates separated mid-apically by a longitudinal, slit-like infold (see Fig.
S2C); entire body coriaceous, locomotory organs apparently absent; head skeleton (see Fig. S2A) 0.5 mm long, form typical for a member of the Platypezoidea 28 ; sclerotization slight except for the black, sclerotized apex of the ventral, labial arm, which projects externally from the apex of the fleshy pseudocephalon in the form of a pair of stout, downwardly projecting hooks; apex of labrum and mandibles tapered, inconspicuous and insignificant relative to the much larger labial hooks; ventral and dorsal cornua elongate and parallel, not diverging; ventral cornu slightly broader than dorsal cornu; cibarial ridges absent. Life cycle and developmental stages of Nothomicrodon. Parasitized A. chartifex larvae, in both early and advanced stages of parasitoid development, can be identified by the small, oval, melanized/sclerotized scar on the host cuticle from which the posterior respiratory process of the parasitoid projects (Fig. 3A). Advanced stages of development (third instar Nothomicrodon larvae) are easily observed through the host integument (Fig. 2B,C). Breathing holes may be located on any part of the ant larva, including the head. The hole is round-oval and measures 0.07 mm in diameter on average (n = 7); its rim is raised and heavily sclerotized. Upon host dissection, eggs were found firmly attached to the host cuticle (Fig. 3B,C, n = 2 cases, Table 1). Eggs are elliptical in form with the apical portion more acute than the base. One egg was measured: length = 1.0 mm, base = 0.37 mm and apical portion = 0.19 mm. All developmental stages of Nothomicrodon remained attached by the posterior respiratory process to the host cuticle. As with other phorid species 29 , the Nothomicrodon larva has three instars. The first and second are of a whitish color and the cuticle is not sclerotized (Figs 3D and 4A). First/second instar Nothomicrodon larvae dissected from the host measured 1.34 ± 0.09 mm (mean ± SD) in width and 1.88 ± 0.11 mm in length (n = 4). Three of these larvae had the pseudocephalic region extended, length 0.56 ± 0.12 mm. Early third instar larvae are yellow in color (Fig. 4B) and the cuticle already has the leathery aspect of the fully grown, reddish-dark brown third instars (see Fig. 2A). After feeding is completed, third instar larvae cut open the host cuticle with their labial hooks (Fig. 4C). These larvae measure 3.02 ± 0.25 mm in width and 3.51 ± 0.14 mm in length (mean ± SD; n = 8). Host caste and developmental stage targeted. Azteca chartifex larvae are oval in form and practically hairless; the mouthparts are small, the mandibles are feebly sclerotized and, as in other dolichoderine taxa, mobility is almost lost 30 . Gyne larvae of the Dolichoderinae subfamily are much larger than worker larvae 30 . The length and width of a random sample of larvae of varying sizes were obtained (n = 212). The MDA model discriminated parasitized from unparasitized larvae according to these attributes, with parasitized larvae exclusively in the larger size class, which corresponds to gyne larvae (Fig. 5, see Fig. S3). The model explained 89 and 100% of the between-group variance of the variables, and correctly assigned most of the larvae (deviance 19.8, misclassification error 0.94%). Only one parasitized and one unparasitized larva were not correctly assigned. Parasitized larvae measured on average 3.4 ± 0.3 mm in width and 4.7 ± 0.5 mm in length (mean ± SD; n = 25); unparasitized larvae measured on average 1.5 ± 0.4 mm in width and 2.1 ± 0.6 mm in length (n = 187).
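As a rough illustration of this kind of size-based discrimination (the paper fitted a Mixture Discriminant Analysis model in R; the sketch below uses a Gaussian-mixture classifier in Python as a stand-in, with synthetic measurements drawn from the reported means and standard deviations rather than the actual data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic (width, length) measurements in mm, based on the reported summary statistics.
parasitized = rng.normal([3.4, 4.7], [0.3, 0.5], size=(25, 2))
unparasitized = rng.normal([1.5, 2.1], [0.4, 0.6], size=(187, 2))
X = np.vstack([parasitized, unparasitized])
y = np.array([1] * 25 + [0] * 187)

# Two Gaussian components, one per putative size class.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
pred = gm.predict(X)
if pred[:25].mean() < 0.5:      # align the unordered cluster labels with the classes
    pred = 1 - pred
print(f"misclassification error = {(pred != y).mean():.2%}")
```

With size classes this well separated, the two components recover the gyne/worker split almost perfectly, which mirrors the near-zero misclassification error reported above.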
Nothomicrodon was found only in the nests that contained gyne larvae: no small or fully-grown minor or major worker larvae or male larvae were parasitized. Gyne parasitism rates. Samples from three nests collected in 2012, containing a total of 1,328 adults (gynes and workers) and 1,329 larvae and pupae, were examined (Table 1). All three samples contained parasitized A. chartifex gyne larvae and/or free wandering Nothomicrodon larvae (Table 1). In general only one Nothomicrodon larva develops per host; on only one occasion were two parasitoid larvae observed inside the same host larva (Fig. 4D). A single Nothomicrodon puparium was examined; it was more elliptical in body shape than the larva and seemed to have contracted. This puparium had itself been parasitized, and the parasitoid(s) had emerged, as revealed by an emergence hole on its surface (see Fig. S4). Rates of Nothomicrodon parasitism for the 2012 samples were calculated as the proportion of parasitized gyne larvae with respect to the total number of examined larvae of this caste (values in brackets are corrected to take into account wandering Nothomicrodon larvae and a puparium). Rates were as follows: sample 1: 54.2% (55.6%), sample 2: 0% (30.8%), sample 3: 100%, with an overall proportion of parasitized gyne larvae of 53.9% (57.9%). Larvae from the 2015 samples were not dissected, and estimated gyne parasitism rates were far lower (Table S2), varying from 3.5 to 75.0% with an overall proportion of parasitized gyne larvae of 8.2% (8.6%). Discussion In this study we resolve the long-standing enigma of the taxonomic placement of Wheeler's Nothomicrodon. Morphological and molecular data reveal that the genus belongs to the Phoridae rather than the Syrphidae, where Wheeler 23 had placed it. Furthermore, our data show that these extraordinary myrmecophilous larvae develop as endoparasitoids of A. chartifex larvae, and are specific in developing only on the fully-grown gyne larvae. The larva we studied shares numerous features with that of N. aztecarum 23 , and both molecular and morphological evidence support placement within the Phoridae. For example, the larval head skeleton is of a platypezoid, not a syrphoid, form. Specifically, the apex of the head skeleton is the ventral labial arm, in the form of a pair of large, sclerotized hooks projecting from the pseudocephalon, which are the main food-gathering structures in platypezoid larvae 28 . The DNA sequence identities and the phylogenetic analyses unambiguously show that the larva belongs to the Phoridae. With similarities of 83 to 88.5%, our sequences, however, are not closely related to any species represented by mtDNA COI sequences in the public sequence databases, and the adult stage remains to be obtained or assessed. Species boundaries between members of the host genus Azteca are not well understood. Azteca chartifex belongs to the A. trigona group, from which the type material of Nothomicrodon (N. aztecarum) was obtained. It therefore remains possible that the same species of Nothomicrodon is associated with both A. chartifex and A. trigona. Phorids are a group of small to minute flies comprising ca. 4,300 recognized species, but estimates suggest that this figure may represent only 10% or less of the total fauna when undescribed species are included 29,31 . They exhibit an array of larval feeding modes, including obligatory and facultative saprophages, predators and parasitoids 29 .
Phorids are parasitic on mollusks and arthropod taxa, such as arachnids, millipedes, and insects. They are well-known natural enemies of pest ants 32 and adult honey bees 33 . Most phorid flies associated with ants live either as nest commensals 34 , or as parasitoids of foraging workers 19 and occasionally alate females 35 . Apart from parasitizing ants, phorids also attack other Aculeata, including bees, stingless bees and wasps 3 . Interestingly, most dipteran parasitoid species attacking social Hymenoptera parasitize the adult stage, although scattered records exist of phorids attacking the larvae of social Hymenoptera (see Tables S3 and S4). About 40% of these cases concern species which develop as ectoparasitoids of formicid and vespid larvae (see Table S3). Larval endoparasitism by phorids is almost exclusively associated with ants (see Table S4). While only two species of the phorid genus Aenigmatias (see Table S3 and references therein) are ectoparasitoids of ant larvae, a growing body of records now concerns ant larval endoparasitism by phorids (see Table S4 and references therein). The discovery, in this study, of a Nothomicrodon species that is an endoparasitoid of ant larvae hints that other instances of ant larval parasitism exist in phorids. Our results and literature search reveal that the Phoridae is the only family with parasitoid species that attack both adult ants and their brood, with, in the case of Nothomicrodon, a specialization for a specific brood caste, i.e. gyne larvae. Several parasitic wasps (Hymenoptera) also attack adult ants or their brood (larvae or pupae); however, this is achieved by distinct wasp families 12 . Several morphological features appear to adapt the Nothomicrodon larva to a parasitic feeding mode. The labial hooks facilitate piercing, tearing and loosening fragments of host tissue, which are then sucked up by the pump in the head skeleton, guided towards it by the relatively immobile labrum and, at either side of it, the tapered mandibles. The long, parallel dorsal and ventral cornua of the head skeleton suggest that it is covered in short, wide muscles. Such a characteristic delivers a shallow but strong pumping action 36 , which is typical of many zoophagous larvae 37 . Perhaps the most distinctive feature of the Nothomicrodon larva is its pear shape, with an extremely broad, smooth and apparently inflexible abdomen contrasting with a highly narrowed, tapered, flexible and retractile thorax. Such a shape is also known in larvae of another taxon of cyclorrhaphan endoparasitoids, the Conopidae, whose larvae attack principally the adults of aculeate Hymenoptera 38 . The pear shape in conopid larvae is adaptive in that the broad abdomen resides in the abdomen of the host while the narrow thorax reaches through the petiole into the thorax to feed on the high density of muscle tissue. The pear shape of the Nothomicrodon larva appears to be similarly adapted. Flexibility in the thorax probably facilitates reaching in and around the host body in order to gather food, and might also help egression from the host after completion of larval growth. The Nothomicrodon larva might well eat the host remains surrounding its body, as occurs in other parasitoids, such as the braconid wasp Toxoneuron bicolor (= Toxoneuron nigriceps). In this endoparasitoid, post-egression feeding enhances growth and survival 39 .
However, on the basis of mouthpart structure, it would likely be difficult for the Nothomicrodon larva to fragment the host remains, unlike the braconid, which has chewing mandibles. In the Nothomicrodon larva, the absence of locomotory structures on the ventrum of the abdomen is possibly explained by the position of the larva inside a host, where locomotion is not required. The relative size of the anterior respiratory processes is surprising given the position of the thorax inside the host. In contrast, the posterior respiratory process projects only slightly, which is probably an adaptive shape in that it is less likely to become caught in host tissue.

Table 1. Azteca chartifex material examined for this study, gyne parasitism rate and number and developmental stage of Nothomicrodon.

Sample | Adult gynes | Pupae | Workers | Worker larvae | Gyne larvae | Total larvae | Parasitized gyne larvae | Parasitism (%) | Parasitism, corrected (%) a | Nothomicrodon developmental stages | Wandering larvae | Puparia
Aztc 017 | 0 | 0 | 1295 | 1170 | 96 | 1266 | 52 | 54.2 | 55.6 | 2 eggs, 13 L 1-2 , 12 early L 3 , 25 L 3 | 3 | 0
Aztc 032 | 0 | 0 | 0 | 0 | 9 | 9 | 0 | 0 | 30.8 | - | 3 | 1 (parasitized)
Aztc 033 | 7 | 44 | 26 | 0 | 10 | 10 | 10 | 100 | 100 | 10 L 3 | 4 | 0
Total | 7 | 44 | 1321 | 1170 | 115 | 1285 | 62 | 53.9 | 57.9 | 62 | 10 | 1
a Corrected to take into account the free wandering Nothomicrodon larvae and a puparium.

The flat ventral and slightly convex dorsal surfaces of the larva, together with its hard, leather-like cuticle, and the possibility of retraction of the pseudocephalon and thorax (the most fragile parts of the body), seem to be adaptations for living on the very hard and concave carton walls of the host nest chambers, and might well provide protection from aggressive worker ants. Whether the Nothomicrodon female places its eggs near the host (or host habitat) and the fly larva actively seeks out its host, or lays eggs directly on an Azteca ant larva within the host nest, remains to be assessed. In any case, our results show that only the fully-grown gyne larvae of the ant host are targeted, and suggest that host size may be a limiting factor for Nothomicrodon larval development. Ant parasitoids impose variable fitness costs on both individuals and colonies 13,33,40,41 . At high rates of parasitism, parasitoids may significantly reduce resource intake, colony size, and colony fitness. By exclusively parasitizing gyne larvae, Nothomicrodon parasitism directly affects the reproductive success of the colony and thereby imposes a high fitness cost on A. chartifex. Other parasites and parasitoids impose high reproductive costs on their hosts, as, for example, in Nosema infections of bumble bees 42 . However, fitness costs are not inevitable; not all A. chartifex colonies we studied suffered high rates of Nothomicrodon parasitism. Materials and Methods Insect sampling and preparation. Azteca chartifex adults and brood, as well as Nothomicrodon larvae, were collected in the state of Bahia in Brazil in 2012 and 2015 (SI Text: Material and Methods). Azteca workers, larvae and pupae were examined under a stereomicroscope (Nikon SMZ745T) and dissected if parasitized (Table 1). Late instar Nothomicrodon larvae found in the nest galleries along with workers and ant brood were collected and examined. Ant larvae collected in 2015 were checked only externally, without dissection, and were used essentially for estimating gyne parasitism rates (see Table S2). Nothomicrodon larvae and a subsample of A. chartifex larvae (including both parasitized and unparasitized larvae) were measured to the nearest 0.1 mm (width × length) using a stereomicroscope provided with an ocular micrometer. A Mixture Discriminant Analysis (MDA) model was fitted to the
An overall gyne parasitism rate was calculated taking into account all of the potential host larvae examined, with a correction for parasitoid larvae/pupae found freely in the nest chambers. Nothomicrodon larvae preserved in alcohol were prepared for description by rehydration and then fixation in Kahle's solution 38. To examine the larval head skeleton, the apex of the thorax of a preserved larva was cut off and soaked in hot KOH for about 5 minutes. Excess tissue was removed from the head skeleton, which was then washed in acetic acid and stored in glycerol. It was examined using a Wild 5 stereo-microscope in a solid watch glass containing 70% ethanol. The drawing was made with a drawing tube attached to the microscope. Terminology generally follows Rotheray & Gilbert 28. Stacked images were obtained using Helicon Focus© (Helicon Soft Ltd). Specimens were also critical point dried and sputter coated before observation with an SM-51 TOPCON Scanning Electron Microscope.

DNA sequencing, identification and clustering. Three second instar larvae of Nothomicrodon, obtained by dissection of the hosts, were used for molecular work (labelled Aztc 017B-I, Aztc 017B-II and Aztc 017B-III). DNA was extracted from a small piece of tissue (0.5-1.0 mm sample) of the larvae using the Phire™ Tissue Direct PCR Master Mix #F-170S kit (Thermo Scientific Baltics UAB, Vilnius, Lithuania) following the Dilution & Storage protocol with some modifications (SI Text: Material and Methods). The obtained COIa and COIb sequence fragments of our species of Nothomicrodon, referred to as incertae sedis in Table S1, were individually blasted against the BOLD systems v3 (boldsystems.org, accessed 7 March 2016) and the NCBI GenBank databases, respectively, using BLASTn for the sequence comparisons and identifications. Sequences produced in this study were deposited in the European Nucleotide Archive (http://www.ebi.ac.uk/ena), accession numbers LT592267 (COIa) and LT592268 (COIb). We additionally used a dataset of COIb sequences retrieved from GenBank with the aim of testing the placement of our species of Nothomicrodon among samples of the closely related cyclorrhaphan fly families using sequence clustering. The dataset comprised eight COIb sequences of Phoridae species, nine of Syrphidae, five of Platypezidae, four of Pipunculidae and one of Lonchopteridae, with Sciadoceridae used as the outgroup (Table S1). The COIb sequence dataset comprised 764 nucleotides and was analyzed using Neighbor-Joining and Maximum Likelihood in the software MEGA v.6 45, using the K2P and General Time Reversible 46 models of evolution, respectively.
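For readers without access to MEGA, the distance-based part of this clustering step can be approximated as follows. This is a minimal sketch only: the four short aligned fragments are invented (they are not the deposited COIb sequences), and MEGA remains the tool actually used in the study.

```python
# Hedged sketch: Neighbor-Joining on Kimura two-parameter (K2P) distances,
# approximating the distance-based analysis with Biopython.
import math
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

def k2p_distance(seq1, seq2):
    """Kimura (1980) two-parameter distance between aligned DNA sequences."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= purines or {a, b} <= pyrimidines))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Invented aligned fragments, standing in for the real COIb alignment
taxa = {
    "Nothomicrodon": "ATGGCATTAGCCGGAATAGTAGGAACTTCA",
    "Phoridae_sp":   "ATGGCATTAGCTGGAATAGTTGGAACTTCA",
    "Syrphidae_sp":  "ATGACATTGGCTGGTATAGTAGGTACTTCA",
    "Outgroup":      "ATGACACTGGCAGGAATAGTTGGCACATCA",
}
names = list(taxa)

# Biopython expects a lower-triangular matrix with zeros on the diagonal
matrix = [[k2p_distance(taxa[names[i]], taxa[names[j]]) for j in range(i)]
          + [0.0] for i in range(len(names))]
tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
print(tree)
```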
Third instar larvae of our species were further compared to the description and figures of N. aztecarum in Wheeler 23 and to the images of the paratype on the Syrphidae Community Website (http://syrphidae.myspecies.info/taxonomy/term/75). Voucher material of ants and parasitoids was deposited in the following collections: Centro de Pesquisa do Cacau at Ilhéus, Brazil (CPDC collection, CEPEC/CEPLAC) (10 host workers, five third instar Nothomicrodon larvae); El Colegio de la Frontera Sur at Chetumal, Mexico (Colección de Formicidae and Colección de Artrópodos) (10 host workers, three third instar Nothomicrodon larvae); the National Museums at Edinburgh, Scotland (three third and one second instar Nothomicrodon larvae); and the Finnish Museum of Natural History at Helsinki, Finland (three second instar Nothomicrodon larvae, two host workers).
Citizen Engagement for Co-Creating Low Carbon Smart Cities: Practical Lessons from Nottingham City Council in the UK

Cities account for three quarters of global energy consumption, and the built environment is responsible for a significant share of final energy use (62%) and greenhouse gas emissions (55%). Energy has now become a strategic issue for local authorities (LAs) and can offer savings when budget cuts have threatened the provision of core services. Progressive LAs are exploring energy savings and carbon reduction opportunities as part of the sustainable and smart city agenda. This paper explores the role of citizens in smart city development as "buildings don't use energy: people do". Citizens have the potential to shape transitions towards smart and sustainable futures. This paper contributes to the growing evidence base of citizen engagement in low carbon smart cities by presenting novel insights and practical lessons on how citizen engagement can help in smart city development through co-creation, with a focus on energy in the built environment. A case study of Nottingham in the UK, a leading smart city, is analysed using Arnstein's Ladder of Citizen Participation. Nottingham City Council (NCC) has pledged to keep "citizens at the heart" of its plans. This paper discusses learnings from two EU funded Horizon 2020 projects, REMOURBAN (REgeneration MOdel for accelerating the smart URBAN transformation) and eTEACHER, both of which aimed to empower citizens to reduce energy consumption and co-create smart solutions. Although these two projects are diverse in approaches and contexts, what unites them is a focus on citizen engagement, both face to face and digital. REMOURBAN has seen a "whole house" approach to retrofit in vulnerable communities to improve liveability through energy efficiency. User interaction and co-creation in eTEACHER have provided specifications for the technical design of an energy saving App for buildings. eTEACHER findings reflect users' energy needs, understanding of control interfaces, motivations for change and own creative ideas. Citizens were made co-creators in eTEACHER from the beginning through regular communication. In REMOURBAN, citizens had a role in the procurement and bidding process to influence retrofit project proposals. Findings can help LAs to engage demographically diverse citizens across a variety of buildings and communities for low carbon smart city development. Author Contributions: Conceptualization, S.P., M.U.M. and R.B.; Data curation, S.P. and M.U.M.; Formal analysis, S.P., M.U.M. and R.B.; Funding acquisition, R.B.; Methodology, M.U.M.; Project administration, S.P.; Writing—original draft, S.P. and M.U.M.; Writing—review & editing, S.P., M.U.M. and R.B.

Introduction

There is a widely acknowledged need to reduce energy use and carbon emissions to mitigate climate change. The built environment constitutes a significant use of final energy (62%) and is a major source of greenhouse gas emissions (55%) [1]. At the same time, cities are facing sustainability challenges, with more than half of the world's population now living in urban areas while consuming 80% of the natural resources [2]. The concept of the "Smart City" emerged as a major response to urban challenges to achieve sustainability. In the past, much of the focus has been on technological interventions, but technology alone may not be enough [3]. Local communities and citizens are often an untapped source of potential to help local authorities deliver smart city innovations.
The intention of smart cities can only be met by making the citizens smart and involving them in city governance and decision-making [4]. Novel citizen engagement approaches need to be explored, both within wider society and within specific organisations, to aid transitions towards energy efficiency in smart cities [5]. Cortés-Cediel et al. [6] argue that there is a lack of research on how this new governance is taking place in the citizen centric smart city arena. Therefore, there is a need to build the evidence base of how citizen engagement can be practically embedded in the smart city activities of municipalities. The aim of this paper is to provide insights and practical lessons on how citizen engagement can help in smart city development through co-creation, with a focus on energy in the built environment. Here, smart city refers to a smart, sustainable (and low carbon) city, as cities cannot be truly smart without being sustainable [7]. This paper explores two distinct approaches to smart city co-creation with citizens, drawing out practical lessons for local authorities. Both are EU funded Horizon 2020 energy and carbon reduction smart city projects in Nottingham (UK): REMOURBAN (REgeneration MOdel for accelerating the smart URBAN transformation) and eTEACHER. The paper first explores the theoretical underpinnings of citizen engagement, with an emphasis on Arnstein's Ladder of Participation [8] and existing research on community and citizen involvement in smart cities and energy reduction (Section 2). After discussing the research methods used in this study (Section 3), the two case study projects are presented with analysis and reflections on lessons learnt in engaging citizens in smart city journeys (Section 4). Finally, the main research findings are discussed in relation to the existing literature, and a set of conclusions and implications of the research are presented (Section 5).

Smart City Development

Smart and sustainable cities are rapidly building momentum and attracting a global spotlight [9,10]. There is little consensus on what a smart city is, and it is still considered a fuzzy concept [11][12][13], but early definitions of the smart city tended to focus on technical aspects [14]. There is a range of industrial definitions for Smart Cities, as discussed by Bull and Azzenoud [15]. Companies, notably including IBM, Schneider Electric, CISCO and Siemens, have exploited the smart city concept to market their visions of future cities, essentially the "application of complex information systems to integrate the operation of urban infrastructure and services such as buildings, transportation, electrical and water distribution, and public safety" [16]. Smart cities have become a major development area for the European Union (EU), which defines them as "systems of people interacting with and using flows of energy, materials, services and financing to catalyse sustainable economic development, resilience, and high quality of life; these flows and interactions become smart through making strategic use of information and communication infrastructure and services in a process of transparent urban planning and management that is responsive to the social and economic needs of society" [17]. Harrison and Donnelly [18] draw attention to the smart city's conceptual roots in the 1990s Smart Growth Movement. While the "Smart City" has largely been perceived as tantamount to a high-tech municipality [14], Nam and Pardo [19] note there is no single template or one-size-fits-all definition.
Through the three dimensions of technology, people and institutions, they go on to lay out strategic principles of smart city development: integration of infrastructures and technology-mediated services, social learning for improved human infrastructure, and governance for citizen engagement. Pham et al. [20] meanwhile suggest that there are three key factors in a smart city's success: human capital, citizen empowerment, and human interaction and involvement. Chourabi et al. [13] set out an integrative framework casting organisation, policy and technology as the main pillars of a smart city, built upon with secondary factors including governance, people, economy, infrastructure and natural environment.

The Role of Citizens in Smart City Development

Citizen engagement is now commonly central to smart city definitions and is said to be essential to address urban challenges [21]. However, actual practical examples are often lacking. Information and communication technologies (ICTs) offer unprecedented opportunities for expanding public participation [22,23]. Europe's manifesto on citizen engagement towards inclusive smart cities accentuates the importance of co-creating solutions. Lea et al. [24] argue that smart city projects are commonly, in practice, top-down through their application of ICT to manage city infrastructure such as transportation, traffic control and the monitoring of energy and pollution. However, grassroots, citizen-driven smart city projects can deliver better value and success, and can also be aided by ICT tools. The "smart" approach can sometimes view people's behaviours as an obstacle to navigate as opposed to a resource to be used. Leach et al. [25] contend that top-down technocentric projects are less likely to deliver their objectives. The former UK Department for Business, Innovation and Skills (BIS) stated that smartness makes cities more "liveable and resilient", ideally allowing citizens to engage with all public and private services on offer, in a way optimally tailored to their needs. It incorporates "hard infrastructure, social capital including local skills and community institutions, and ICT technologies to fuel sustainable economic development and provide an attractive environment to live for all" [2]. This indicates an increasing role for citizens in smart city projects, as suggested by Berntzen and Johannessen [26]. In spite of this shift, Saunders and Baeck [14] report it is still common for smart city strategies to be void of meaningful engagement in the design of new energy and low carbon interventions. While policymakers and planners generally understand, and often aspire toward, enabling more inclusive participatory strategic planning processes [27], there is far less consensus on how to make this a reality, even with the addition of digital tools (Leyden et al. [28]). Indeed, even though citizens are theoretically the beneficiaries of smart city projects, traditionally they are rarely consulted about what they want and their ability to contribute [29], which Pham et al. [20] argue is often the fundamental flaw leading to failure. Expanding on this, there are numerous studies devoted to exploring the relationship between occupant behaviour and energy consumption in buildings, such as Yu et al. [30], who developed a methodology based on cluster analysis.
Most studies fail to consider the role that different stakeholders can play in determining the type and extent of retrofit measures, or to develop methodologies that integrate social, environmental, economic and technical concerns [31], leaving a gap for holistic, citizen-centric research. Israilidis et al. [32] also suggest that citizen-centric initiatives can be the vehicle for future smart city developments. Sovacool [33] notes three benefits of citizen engagement. Besides the obvious benefit of empowerment in decision-making processes, advantages are two-way when citizens add value as "nonexperts" with higher sensitivity to important ethical components, while also becoming increasingly likely to accept change, having been involved in its design [33]. Of course, the extent and nature of citizen engagement can vary markedly in different contexts. Bull et al. [34] argue that many of the new models of smart city shift the whole emphasis of engagement from an active choice that citizens have to make to an integrated one in which citizens are providing feedback. A useful typology for explaining the levels of citizen engagement is Arnstein's ladder of participation (Figure 1) [8], which has particular popularity in policymaking and planning [35]. It illustrates stages of involvement, ranging from the lowest category, "manipulation", a form of nonparticipation which is top-down and one-way, up through increasingly meaningful forms of engagement. While "consultation" seeks opinions, it is still classed as "tokenism". The highest step is "citizen control", where participants not only influence outcomes but make decisions. Looking more specifically to the topic of smart technology, Bull et al. [34] draw attention to the increasing inclusion of citizen engagement within the definition of "smart city" itself, as they discuss the phrase's evolution over time. The role citizens play within their smart city can range from using or giving feedback within integrated systems, so that they effectively become a living data point [36], to playing a meaningful role in the design of new smart city development through a co-creation or co-design approach.

Co-Creation for Smart Cities

The term "co-creation", sometimes used interchangeably with "co-production", is the provision of services through regular, long-term relationships between professionalized service providers and service users or other members of the community, where all parties make substantial resource contributions [37]. Bovaird [37] argues that it transcends conventional engagement and participation by actively harnessing citizens' skills and experiences. Correspondingly, Granier and Kudo [23] regard citizen engagement not "simply as a way to stimulate participation in the public debate but as a process of social innovation which aims to allow citizens to co-produce Public Value", finding it to increase "the adoption and the sustainability of public services in line with (…) the smart city's strategic vision". Co-creation offers a solution to deliver sustainable long-term benefits for public service providers and users in cities [38]. Various models have been successfully implemented in the design of smart solutions.
Examples include user driven innovation (UDI), a user-centric product development process where users contribute to creation and refinement [39,40], and user-centred design, which is "based upon an explicit understanding of users, tasks and environments", "driven and renewed by user-centred evaluation" and characterised by a cyclical structure [41][42][43]. Depiné et al. [44] argue that Design Thinking, "an analytical and creative approach that focuses on the concerns, interests and values of the (…) citizen", plays into the vision of Human Smart Cities, which they state are the "new generation of smart cities", balancing "the hard technological infrastructure with soft factors such as social engagement and citizen empowerment". While publications around co-creation in the context of smart cities are significantly increasing in volume, there appears to be little exploration of creativity within co-creation. Few studies consider the fine details when it comes to the tools and techniques implemented during engagement sessions and how these might influence the quality of data and attitudes going forward. For example, Cellina et al. [21] launched a living laboratory for the co-creation of a mobility behaviour change app to trigger novel governance practices in cities. Granier and Kudo [23] argue that scholars have highlighted that little research has focused on actual practices of citizen involvement in smart cities, indicating a gap in the literature. Furthermore, Morton et al. [22] are of the view that there are few empirical studies exploring how building user engagement can shape the development of ICT-based energy efficiency interventions.
Research Methods

A qualitative approach was adopted for this study to enable a deeper understanding of citizen engagement and its role in smart city development. The research strategy for this study was a comparative case study of two examples of engagement in Nottingham, England. A case study strategy involves an empirical investigation of a contemporary phenomenon within its real-life context using multiple sources of evidence [45,46]. Nottingham was selected as the case study because of its participation in two EU H2020 projects. It was recognized as a "Lighthouse City" in the REMOURBAN (REgeneration MOdel for accelerating the smart URBAN transformation) project and was a key partner in the eTEACHER project (more details below). Primary data were collected by conducting semistructured interviews with senior and middle managers in Nottingham City Council (NCC) and other stakeholder organisations such as Nottingham City Homes (NCH) and Nottingham Energy Partnership (NEP). All the interviewees were selected using the convenience sampling technique, as they are involved in citizen engagement and smart city projects in Nottingham. Project-related documents and deliverables were used as secondary data to support the analysis. One of the authors has been leading the delivery of eTEACHER's engagement strategy in Nottingham; therefore, reflective practice is also used for analysis, which helps to reflect on actions for continuous learning. Table 1 presents a list of interviewees.

Nottingham as a Case Study

Nottingham has a distinguished position in the UK and globally when it comes to energy and decarbonization. Nottingham City Council (NCC) has set an ambition to become the first carbon neutral city in the UK by 2028 [47]. The core drivers for NCC's Energy Services team include combating fuel poverty, improving energy security through the district heating networks and solar PVs, and generating energy and cost savings. The city surpassed its carbon reduction target of a 26% reduction by 2020 (against a 2005 baseline) four years early. The city has one of the largest district heating networks in the UK and a dedicated company, Enviroenergy, to manage it.
NCC has an arm's length management organisation, Nottingham City Homes (NCH), managing approximately 27,000 homes. NCH has retrofitted over 5000 domestic properties with solar PVs and has an ambitious retrofitting programme for the future. Households across Nottingham have taken up over 40,000 energy saving measures delivered by NCC, including 7000 external wall insulations. Nottingham is building on its strong reputation and experience in decarbonisation to create a unique selling point for the city, leading to commercial opportunities, job creation and regeneration. It hosts two of the UK's leading universities for domestic energy research, while a good number of citizens are involved in the low carbon sector through work (over 900 businesses are classed as being in the clean technology sector), education or community groups.

A Tale of Two Projects

This study examines two different EU funded smart city research projects to illuminate practical lessons for citizen engagement to reduce energy consumption in low carbon smart cities. The REMOURBAN and eTEACHER case studies explore how citizens are engaged as co-creators at the community and building scale, respectively (Table 2).

Case Study 1 - REMOURBAN

REMOURBAN was an EU Horizon 2020 smart city demonstrator project, tackling issues at the intersection of the (i) transport, (ii) energy and (iii) ICT sectors. The project was a partnership between three EU "lighthouse" cities: Nottingham (UK), Valladolid (Spain) and Eskisehir (Turkey), and two further "follower" cities: Seraing (Belgium) and Miskolc (Hungary). Each lighthouse city aimed to develop novel solutions, according to its own needs, which were then shared across the follower cities to develop generic, replicable solutions. Engaging citizens is a key feature for successful implementation and sustainability of the REMOURBAN model. The project has three areas: sustainable urban mobility, integrated infrastructure, and sustainable districts and the built environment (the focus of this paper). Citizen engagement took centre stage for a Nottingham demonstration area, Sneinton, where 463 residences were retrofitted. A citizen engagement strategy was developed based on the city's past processes together with new ideas, using the principles outlined in Figure 2. Engagement maturity is categorised into three levels. Level three demonstrates the ultimate intention to empower and co-create smart city solutions with citizens, devolving decision-making on one or many parts of the process to align with the top tier of Arnstein's Ladder [8]. Engagement was broken down into six key steps (Table 3). Definition of messages: REMOURBAN defines citizen engagement initiatives as processes by which public concerns, needs and values are incorporated into decision-making. Nottingham developed positive messages for all three levels of citizen engagement for the demonstration and city area. However, there was a lack of clarity on how these messages were delivered. This may suggest that the messages were mainly developed for level 1, which is "Tokenism" on Arnstein's Ladder, and therefore needs improvement to achieve more mature levels of engagement. Target audience and expected outreach: The target audience were landlords of privately rented homes, commercial businesses in the demonstrator area, city-wide citizens, community groups and politicians. The demonstration area was a relatively active community and had well-established community groups.
This area had a high number of privately rented homes. Tools and mechanisms: A combination of online and offline citizen engagement activities was available, including direct mail, one-to-one visits, community events, news channels, local newsletters, local noticeboards, community champions, social media, websites and local media, namely Notts TV, Nottingham Post and Radio Nottingham. Action plan for citizen engagement: Key actions for citizen engagement in REMOURBAN included the Stakeholder Briefing Pack, Engage the City and Sneinton, Targeted Information for demo houses, Create Marketing Collateral, and press releases to local media. A citizen engagement implementation plan for energy interventions was developed for the demonstration area. • 465 households were segmented into typology groups (e.g., social and private households) to target consultation events and supporting materials to streamline the process. • Early meetings were planned to ensure that people could have their say in the development of the delivery plans. This included a step-by-step "process map", which details the work programme, daily liaison control, regular local events, sign-off of the completed work and customer satisfaction. Contact started early in the project and continued throughout the design, tendering, implementation and monitoring/feedback. Description of resources: Communications and marketing personnel within NCC's energy services team led on engagement activities. GBP 15,000 (British Pound Sterling) was set to be spent on the local desk (a Marketing Officer in the energy services team) placement and marketing collateral in the project. Beyond the project, there was a lack of funding to effectively implement citizen engagement projects.

Steps for Citizen Engagement

This REMOURBAN methodology provides cities with a model for developing citizen engagement for smart city transformation. The traditional face-to-face engagement model was predominantly used, which may have limited reach to all segments of communities.
However, it may be that small-scale examples can offer insights into how citizens can engage with change at a local and city scale. For example, lessons on message tailoring from REMOURBAN align with previous experiences of NCC, who can look to upscale this kind of framing: "If it is just an energy efficiency event, no one will come. Better to have a stand at existing events. In areas of deprivation it is about getting messaging right - about saving money; to be healthier; we made sure messaging was more around these predominant needs. There is opportunity once people are engaged in that message to follow up with messages about lower emissions and the city going greener". (I-1)

Citizen Engagement Strategy in Nottingham

The local desk in Nottingham coordinated the efforts of the partners for the implementation of the citizen engagement strategy. For the energy district interventions, NCC, NCH and NEP worked together to explain the REMOURBAN offer to demonstration-area households. A key aim of the citizen engagement strategy was to ensure that there was enough uptake from the demonstration area for the project to be viable. The strategy aimed to disseminate the benefits of a smart city approach through a citizen engagement framework with four areas: 1. Consult and engage. 3. Engagement throughout the operational cycle. 4. Knowledge dissemination and public outreach. The REMOURBAN citizen engagement activities were built on existing partnerships developed through previous energy efficiency programmes across the city. I-2 stated that NCC attempts to encourage and foster dialogue towards empowerment, in line with Citizen Power (rung 6: Partnership) on Arnstein's Ladder. "It is a conversation in which partners are equal and voices are respected and conditions for having that conversation are tended to, so we are aware of and create spaces in which people feel comfortable and empowered to share their ideas and concerns and also become more aware of what they are putting into. This needs to be more proactive not reactive". (I-2) A coherent citizen engagement strategy evolved through householders' feedback to identify what types of engagement would contribute most to creating a legacy going beyond the project's geographical boundaries and life cycle. NEP focused on finding how to ignite interest in domestic energy efficiency solutions among the nonparticipating private sector households living in the project area. Throughout the project, NCC sought to build long-lasting awareness and engagement with energy efficiency messages for citizens, to reduce energy and carbon emissions whilst overcoming fuel poverty. I-6 expressed a wish to go further: "We are probably not doing as much of it as we would like to, but we certainly have got those fuel poverty stats being the key thing and where we would like to target any interventions or support that we can. So, we understand areas within the city that are probably at most need". (I-6) REMOURBAN engagement was mainly top-down and there appears to have been no initial engagement with residents for the design of solutions; rather, solutions were consulted on at a later stage, which qualifies as tokenism on Arnstein's ladder. However, the citizen engagement process in REMOURBAN for the 2050 homes included the involvement of citizens in procurement and contracting, which is unique. Tenants had contact with the bidding process, meaning dialogue happened at a stage when the tenants still had the opportunity to influence the final proposals.
I-3 stated "We had different events to get people engaged and tell them about plans. We had various workshops with contractors so that people could see products putting in, for example, district heating private wire", implying limited co-creation in comparison to eTEACHER. REMOURBAN beneficiaries may have a positive role to play in future engagement activities. I-2 stated that: "Much of the research was done when I got involved moving into implementation phase for which need for participative form of engagement more around how to engage citizens to understand benefits of this programme and how do we share this information, rather than how do we engage citizens to co-create something based on their inputs, which is in eTEACHER-not same degree of citizen stakeholder engagement as I know it". (I-2) However, residents did influence some design features according to I-3: "Oh they definitely helped to shape it. In Energiesprong homes, they chose colour and two entry points to homes, back and front. Their idea was to put bell in, a bell with two sounds to know if it was the back or front-something as simple as that which really made difference". Citizens in Demonstration Area Participation within the demonstration area was vital for success. The data of this area provide a snapshot of the population and its characteristics that underpin NCC's citizen engagement strategy. It had a diverse community with a sizeable immigrant population. A high number of citizens work in lower managerial, administrative and professional occupations, followed by routine and semiroutine occupations. Messaging was focused on saving money and a warmer home to improve health and well-being, with the secondary message of energy efficiency. L-8 explained "It's all about money isn't it? I mean let's be honest; it's about people's bills and people will be receptive if it's something that works for them" (I-8). A significant barrier to engagement was disruption during retrofits. However, NCH experienced that the prospect of low energy bills and better health outcomes overcame any foreseen inconvenience from the delivery of the interventions. Value addition for the house was well-received, showing how tangible benefits can facilitate better citizen engagement. In REMOURBAN, cost savings has been an important driver for most of the residents, corroborated by I-8. Landlords of Privately Rented Homes in the Demonstration Area NEP led on the engagement in the 37% of properties in the demonstration area that were privately-owned; a challenge given that the ultimate decision-makers were not the occupiers, but their landlords. I-13 stated: "We have focused on the private sector [as it] gets forgotten when it comes to energy". UK rental models are based on short-hold tenancies. The private renter is far more transient than social tenants and that of owner occupiers. NCC's messaging was aimed at incentivizing both landlords and tenants to invest in energy efficiency. Forty-nine properties signed up for the retrofit, which was only 5% of targeted households, in line with NEP's previous difficulties in signing people up on these types of schemes. It could be attributable to lack of interest and perceived cost implications. The level of deprivation in the area is high and the correlations between low income and low educational attainment were apparent. Private sector residents who owned homes may not have had the disposable income to afford the contribution. 
Community Groups

NCC has teams of Neighbourhood Development Officers (NDOs) aiming to help build cohesive and empowered communities that have strong relationships with the Council. I-3 stated: "Before we went, there was no community within that group and engagement helped bring the community together". The REMOURBAN area had two NDOs, providing support to residents, community associations and groups across the project area, including the Renewal Trust, Sneinton Tenants Outreach Programme Community Group, Muslim Community Organisation, Friends of Green's Mill, Friends of Windmill Park, Alchemy Group and Newark Crescent Woman's Group. NDOs had regular Local Issues Meetings in six locations across Sneinton for participation from each ward. A cross-section of residents attended, enabling local government professionals to access what participants describe as the "real problem". The NDOs were a source of local information and helped project officers establish links with the wider community. The community groups were engaged to support the launch of the retrofitting offers. NCH had a tenant liaison team that recruited Energy Champions for the Sneinton area in 2018, trained by their Fuel Poverty Officer.

Tools and Mechanisms

The tools and mechanisms applied in the citizen engagement framework represent a shift in citizen engagement philosophy. A more accessible framework with a variety of engagement points was developed that allows greater levels of participation in accordance with individual contact preferences. A range of tools and mechanisms was utilized to deliver engagement. REMOURBAN often relied on more traditional methods of communication, including one-to-one discussions. I-3 stated: "Although the community-based engagement got some success, a lot of tenants preferred 1:1 engagement", perhaps because some people felt uncomfortable leaving their homes. NEP also redeveloped their website to allow the project to have a lasting presence, with the aim of creating a true sense of community for REMOURBAN beneficiaries, and to open up opportunities to other interested households in Sneinton and across the city. REMOURBAN thus combined traditional in-person citizen engagement with potentially innovative ICT-based tools and social media.

Resource Allocation

NCC had a "Local Desk", an individual responsible for communication and engagement. A communications taskforce was developed in Nottingham whose members supported the coordination of their organisations across the city. REMOURBAN allocated 16 months of resources to the Local Desk for planning, monitoring and reporting on citizen engagement. Each partner delivering an intervention had resources to allocate from the project deliverable budget to work with target citizens. All local partners liaised with the Local Desk to ensure that activities were correctly branded and fed into the narrative of Nottingham's REMOURBAN journey. The widely perceived shortfall of government support and funding proved to be a major barrier. Planned engagement activities were often delayed; I-3 stated: "Obviously there is pressure to complete in those timescales because in innovative projects, things don't go to plan". Most of the interviewees indicated that financial pressures in recent years have taken their toll on public sector energy engagement: "It's very difficult to get the funding to be able to do some projects. There are things that I really want to do, but I'm struggling to have the assets to be able to do that. I think we've just got to look out for funding".
(I-9) Smart city projects have provided NCC with practical learning, and their next step is to explore whether these can be incorporated into future work. "Our developmental step is to feel that we have more capacity (…) around a shared understanding of how to work in shared and dialogistic way. Engagement works best when there is a continuum in which the citizens you engage will move up a notch, but you don't stop empowering or enabling them". (I-2) In addition to REMOURBAN, NCC is continuing its citizen engagement journey and building on its existing work, as I-1 stated: "Conversation will still be happening. REMOURBAN will carry on in our mind as professionals, as a catalyst for doing things differently and will have legacies, but more with professionals rather than citizens". (I-1)

Case Study 2 - eTEACHER

eTEACHER is an EU funded Horizon 2020 project with partner organisations in six European countries; it was adopted after REMOURBAN, incorporating its learning. eTEACHER uses ICT solutions to encourage and enable behaviour change of building users towards energy efficiency. Nottingham hosts two pilot buildings (Table 4) and aspires to co-create solutions with their users, many of whom are also Nottingham citizens. ICT engagement began with Workshop Ask sessions, which consulted building users to gather data to feed Workshop Bridge, a single meeting attended by project partners who accordingly began to establish recommendations for app design. Co-creation was thereby employed from the project's outset, seeking users' habits and needs in the predesign stage. It was seen as critical "to be at the start of the pipe with engagement thinking in order to build all the way down the line" (I-2), which demonstrates a forward-thinking approach and a recognition that meaningful user input can only be achieved by frequently returning to users throughout a project's entirety. Advantageously, the team were able to ensure the app's development was continually catering to users' dynamic circumstances to increase the chance of ultimate uptake. This dialogistic process is, of course, two-way and therefore within the topmost third of Arnstein's ladder ("degrees of citizen power"). The approach ensured that all stages of the app development were grounded in the actual, not presumed, needs of its audience. Rather than "helicoptering in smart city solutions, (we) put the citizen at the heart of the conversation, as it's their life we want the solutions to address" (I-2). This indicates a deep-rooted interest in solving localized problems in a way that is framed by the users themselves, in addition to the broader, predetermined aims of the project. By sending the message that the project was "willing to listen to what you need and try and fit that in with our design" (I-4), users were encouraged to continue participating. Combined with the dedication to an active and persistent engagement programme, this open attitude, again in line with the empowerment segment of Arnstein's ladder, appeared to help grow trust and familiarity with users to increase the chance of widespread and long-term cooperation.

Workshop Ask

Workshop Ask asked users about their ICT practices and opinions on different kinds of eTEACHER visions. It was delivered in all 12 pilot buildings using a uniform template to generate consistently formatted results, amenable to comparable analysis. The following section concentrates on the experience of the Nottingham pilots.
The session collected a large volume of information within a limited timeframe, yet minimised the burden on users by means of varied, visual and interactive tasks (Table 5). Although basic focus groups have the advantage of logistical simplicity, weaving these conversations around practical activities enables the identification of quantitative trends for making helpful generalisations to guide design recommendations. Contextual information was regarded as very important to gauge "how the tool would fit into their building, how eTEACHER could benefit them" (I-2). In line with the "one-size-does-not-fit-all" philosophy of energy behaviour change interventions, there is clear acknowledgement of the need for hyperlocalised design tailored to the specific needs of users, born from two-way engagement, rather than a blanket solution, perhaps qualifying as tokenistic consultation on Arnstein's ladder, but falling short of empowerment. The activities' colour, tactility and mental stimulation make for a more memorable experience and form a positive association with the project in the mind of the user, boding well for future cooperation (Figure 3). Throughout Workshop Ask, participants used sticker sheets in one assigned colour so answers could be traced back to their user role and demographic for trend identification. Facilitators recorded key points and quotations to capture extra insightful information. There were 10-12 participants in each session, with staff including teaching, administration, cleaning and kitchen staff, a councillor, and pupils in the school. The engagement lead explained that throughout eTEACHER, activities "all energised and creatively engaged the subjects even though the people undertaking those were vastly different" (I-2). The heterogeneity creates challenges, as users in a session "don't have the same relationship with that building; they don't necessarily understand their own authority or own ability to make change (…). We'd have some users thinking "yes I can do this as a result of this", but others don't think they can do anything" (I-2). There were also concerns about the representativeness of attendees. Given that users in these buildings had to sacrifice work or break times to partake, facilitators postulated that engagement was limited to "participants who see the benefits of providing projects with feedback" (I-4). While the sessions were received well by users, arranging them in the first place was not without difficulty. The biggest issue was reportedly "trying to facilitate these sessions through a key building actor who sometimes isn't necessarily as approachable as we might want them to be" (I-4), which was overcome to a large degree by "finding a person who will champion your project and building up a trust (…) to really show what the benefit of the project is and how it could benefit them in the long run" (I-4). Figure 4 illustrates quantitative data gathered from the Workshop Ask sessions, which were supplemented by qualitative quotations from the discussions. By colour-coding participant answers according to role type, it was possible to identify user trends across building types (including other European pilots beyond Nottingham). This example shows that students across all buildings use smartphones least frequently and highlights that nearly 30% of cleaning and kitchen staff across all pilots have little access. Such information is important when designing an app that must cater to many types of people and could influence its login structure and setup. Accompanying quotations enriched the data, sometimes lending explanations to the numbers. In this case, a Council House (CH) employee described their smartphone as their "life-line" and the "gateway to (their) whole life".
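As a purely hypothetical sketch of this kind of role-based trend analysis, the snippet below tabulates invented sticker-sheet responses by role; the records, roles and answer categories are made up for illustration and do not reproduce the project's data.

```python
# Hedged sketch: tallying colour-coded workshop responses by user role,
# in the spirit of the trend identification behind Figure 4. Invented data.
import pandas as pd

responses = pd.DataFrame(
    [
        ("student", "DCA", "rarely"),
        ("student", "DCA", "sometimes"),
        ("teaching staff", "DCA", "often"),
        ("admin staff", "CH", "often"),
        ("cleaning/kitchen staff", "CH", "rarely"),
        ("cleaning/kitchen staff", "CH", "sometimes"),
    ],
    columns=["role", "building", "smartphone_use"],
)

# Percentage of each answer within each role, pooled across buildings
trends = (pd.crosstab(responses["role"], responses["smartphone_use"],
                      normalize="index") * 100).round(1)
print(trends)
```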
Results Highlights

Djanogly City Academy (DCA) participants particularly valued educational software. One student insisted they would be "lost without BBC Bitesize". A teacher explained that DCA was the top user of "General Certificate of Secondary Education (GCSE) pod" in the UK and uses "Forum", where students contribute to communally displayed news. Many participants expressed an aversion to games. A CH employee explained that "they never go near the games (because they) take up too much time". DCA students and staff mentioned previous unsuccessful attempts to introduce games in an educational context. Some activities were more open-ended in nature, yielding entirely qualitative data. These parts offered scope for more creativity on the user's side. Generating original ideas gives more potential power to users than merely commenting on pre-existing concepts, as the project is processing new material from the users as well as projecting material to them, strengthening the "two-way" dynamic. Figure 5 presents written highlights from posters on which users were asked to create and illustrate what features would make a good energy app.
There was said to be a "very strong sense of participation from the eTEACHER project" (I-2) and "they were all happy to engage and give their honest opinion" (I-4). Having a consistently comfortable atmosphere that promotes transparency on both sides is testament to the healthy and balanced relationships fostered through the engagement sessions. Referring to Arnstein's ladder, these project conditions seem akin to "partnership", transcending the realm of tokenism. However, one of the challenges in eTEACHER was that users may not have the same relationship of ownership within the building, which may have a detrimental impact on engagement. For example: "One of the challenges is that when talking to room full of people in building, they don't have same relationship with that building. They do not necessarily understand their own authority or own ability to make change". (I-2)
For example, "One of the challenges is that when talking to room full of people in building, they don't have same relationship with that building. They do not necessarily understand their own authority or own ability to make chance". (I-2) Feedback Forums Workshop Ask was the first of several engagement sessions with building users. The project "didn't want it just to be a collection of feedback and responses from users and then feed that back to developers who just go off and make something and then users see the final product" (I-4). Thus, regular meetings were arranged with a workshop or focus group format. These were named Feedback Forums (detailed in Table 6) and continually seeked the views of users, "engaging them with current project progress [so the sessions are] used as a way of them becoming a sounding board" (I-4). Feedback forums lend more evidence to eTEACHER's commitment to the balanced interchange between the project and its participants, so that app development may be fuelled by the aspirations of users all the way through to implementation. Bringing together users has proven advantageous in itself, as sessions have instigated the sharing of others' actions that others had not been previously aware of, and was therefore "good for facilitating understanding building-wide issues and ways of getting around them, even though the tool itself hasn't been rolled out" (I-4). Going beyond the project's scope, the engagement has started to lead to wider, unforeseen benefits even in early stages, possibly serving to create a community of invested building users, which in turn might strengthen the project and improve its chances of success. After initial app design, before official rollout Planned Discussion This paper explored the role of effective citizen engagement for co-creating smart city solutions and offered lessons to be learnt. In practice, it is rare to get access and insight into two related but distinct examples of citizen engagement in a single municipality. Whilst there are potential limitations with exploring one city, the two EU projects allowed for a contrasting picture to emerge. The two interlinked case studies illustrate two very different experiences of trying to climb the ladder of participation in the context of sustainable and low carbon smart cities. It is, of course, recognized that cases are limited and cautious, but the findings and lessons learnt are relevant to both smart cities and citizen engagement debate and to the wider policy challenges of minimising energy and carbon emissions in cities. The study provides clear evidence that cities can go beyond a purely transactional relationship between citizen and service providers and get close to the partnership rung on Arnstein's ladder. This is significant, as it is about enabling and encouraging the citizens to become proactive and participative members of the community, as suggested by BIS [2]. The eTEACHER project in Nottingham suggests an approach of co-creating smart solutions can work, but there are limiting and enabling factors, notably the importance of skill in terms of actually being able to facilitate engagement well and secondly the time to do it. A notable and important difference between these two case studies was the audience. REMOURBAN was attempting to engage people in their own homes using predominantly traditional and face-to-face methods. This is in line with Johnson et al. 
[48] who found that most smart cities use traditional types of citizen engagement such as citizen meetings, round tables and workshops. The eTEACHER project was operating in a more formal context that enabled a structured approach with clear boundaries and digital engagement, the key differentiator of the eTEACHER project. Morton et al. [22] state that improving and widening building user engagement by involving users in co-designing interventions has the potential for greater acceptance and impact. People, collaboration and governance are key components of smart city processes [21]. Both REMOURBAN and eTEACHER have the same governing structure in terms of a broad participation strategy in the city. NCC's strapline for its entire smart city programme is "Smart because of You": the concept was that development would be led by the intelligence of citizens, with their input defining what was important to them in their lives, understanding how they lived in their neighbourhoods, and developing products and projects that would enhance their lives, so that the smart cities research programme the city was initiating and supporting would respond to their needs. Both projects had citizens, in theory, at the heart of the conversation. But commitment is not enough. REMOURBAN, as its key actors noted, did not allow sufficient time in the lifecycle of the project to actually build the relationships and enable long-term sustained engagement. Much of this had to do with the funding restrictions, but the team also relied on traditional marketing and information provision approaches, as those involved were based in communication and marketing. In REMOURBAN, citizens participated in the procurement process to carry out energy-efficient retrofit of homes, as they were the beneficiaries of the procurement output. This was an opportunity for them to influence contracting and the final proposals whilst enhancing communication between the city council and citizens, as suggested by Berner et al. [49]. Hossain et al. [50] state that citizen engagement in public procurement can help develop transparency and accountability in the process, along with ensuring high quality of public service delivery. In contrast, eTEACHER had a concerted programme of workshops and engagement tools for co-creation written into its proposal document, with a whole year of sustained engagement before the ICT tools would be implemented. A dedicated person was responsible for this process and was able to be trained and upskilled in these approaches. There was an honest period of reflection from the energy team, who had witnessed some of the challenges of REMOURBAN. Finally, as noted above, the context of a school and a local authority building in the eTEACHER project meant that the roles and processes could be more easily managed. Arnstein's ladder is utilised as a heuristic tool [8]. Whilst useful, there are questions about the efficacy of Arnstein's ladder in that it does not seem to offer an appreciation of the dynamic nature of citizen engagement, especially across different audience types (household residents versus employees, for example). Collins and Ison [51] argue that the form, meaning and purpose of participation have now diversified; they suggest that the phenomenon of social learning, rather than participation, more accurately embodies the new kinds of roles, relationships and sense of purpose that will be required to progress complex, messy issues such as energy and smart cities.
Further research is then required in terms of how this model could be adapted and applied to a range of organisational contexts, especially ones in which delegated power and control may not be realistic or desirable. Nottingham demonstrates the practice, challenges and scale involved in organising digital and face-to-face citizen engagement for buildings and communities. Although there is no one-size-fits-all approach for citizen engagement [3,19], Nottingham's experience and lessons can help facilitate citizen engagement and the co-creation of low carbon smart solutions. Lessons were learnt across the two smart city projects, and learning from REMOURBAN informed eTEACHER. Citizen engagement activities were defined as a key determinant of REMOURBAN, yet for the reasons outlined it never quite reached its potential. Practical lessons were learnt, and eTEACHER presents a more mature and, to some extent, better executed vision of the same aspiration. These activities ensure that the smart city model is co-created and accepted by citizens, thereby enhancing success and replicability. Less clear is how cities empower citizens in co-creating services and products, and this study offers some insights and lessons learned.

Conclusions and Recommendations

This paper offered evidence on how citizen engagement can be embedded to deliver the smart city activities of municipalities. Based on the case study analysis of two EU-funded projects, the study argued that citizen engagement needs to be at the forefront of co-creating low carbon smart cities. Both the REMOURBAN and eTEACHER projects offered novel insights into their citizen engagement activities at both building and neighbourhood scale. The projects used a variety of digital and face-to-face engagement methods to engage a demographically diverse audience, which can help support co-creation in the context of smart cities. This study offered a novel contribution to literature and practice by building the evidence base of how citizen engagement can be practically embedded in smart city activities. The lessons learnt need to be shared with practitioners so that citizen engagement goes beyond a tick-box exercise, which, as Israilidis et al. [32] and Selada [52] suggest, is key to developing the knowledge sharing and learning capabilities needed for smart city success. The study offers recommendations arising from the empirical research. Citizens are usually involved too late in the process of smart city development, mainly invited to verify requirements, designs and prototypes that have already been produced, or used as sensors or data collectors. This needs to change. Firstly, an improved approach should consider citizens as active agents (actors) within the development process of smart cities. Citizens can collaborate in co-creating smart cities together with the private sector, governments and knowledge institutes following the quadruple helix approach [52]. Secondly, citizens can bring value to the table when they are part of the design and innovation process in smart cities, but this takes skilful facilitation. Thirdly, it takes time to allow innovative initiatives to develop. Finally, it requires significant personal challenge and commitment from those in managerial positions; this translates as proactively and routinely embedding multifaceted citizen engagement efforts in every stage of smart city projects.
Therefore, engagement strategies must not sit separately from smart city project plans but be intertwined with them, so that citizens are continuously influencing the work's direction and development. Critically, an iterative approach must be taken so that managers are open to significant changes in course according to citizen needs, even if that means deviating from the core elements stated in original plans. The transition to more sustainable energy systems has set about redefining the social roles and responsibilities of citizens [53]. For low carbon smart city research to reach its full potential, more work is needed in understanding other cities and local authorities and how different organisations respond to citizen engagement to inform behaviour change in the context of smart and sustainable cities. Further exploration of what works and what does not is required to inform future smart city practice, not only in the UK and Europe but beyond, as cities globally appear to be facing common sustainability challenges. This can be carried out as part of a future research agenda. The focus of the paper is energy and low carbon smart cities, but it outlines some principles of how to undertake effective engagement in other contexts. This paper concludes that local authorities and stakeholder organisations in cities should develop flexible and context-specific approaches for promoting co-creation for smart city development, rather than implementing a "one-size-fits-all" approach to citizen engagement.
The urgency of intellectual property rights in the digital era from the perspective of Sharia economic law in Indonesia

Intellectual Property Rights (IPR) are essential for economic development and innovation, especially in the current digital era. However, certain aspects need to be considered in the context of Islamic economic law to ensure that IPR is applied fairly and in accordance with legal principles, especially Islamic law. This article analyzes the perspective of Islamic economic law on IPR in the current digital era, including its protection, use, and utilization. This study is a normative juridical study using a statute approach, which begins by investigating existing laws and regulations both in the positive legal framework and in the Islamic legal framework, which is based on the Koran, the Hadith, and the fatwas of the Ulama. The study found that even though IPR is not stated explicitly in the sharia, in its essence it is equated with property (mal); based on the norms, values, and principles contained in Islamic law, especially the maqashid asy-sharia theory, the protection of IPR is fundamental and integrated into the belief held by Muslims that Islam, which consists of monotheism, sharia, and morals, is an inseparable unity, and that illegal use of IPR is an injustice. The urgency of this study lies in the importance of IPR protection to encourage innovation and creativity in various fields, as well as the importance of considering Sharia principles in its protection and utilization.

Introduction

Intellectual Property Rights (IPR) are an issue of ownership rights with global implications. As one example, IPR intersects with the current development of the creative industry at the national and international levels. Innovative ideas are a resource with economic value. Therefore, they must be protected, which is an essential aspect of IPR. The government appeals to the public, especially creative economy actors, to be aware of the importance of IPR.

Meanwhile, from the perspective of Sharia economic law, IPR should be seen through the values contained in Islam, namely by applying Sharia principles (especially Sharia economics, which is part of Islam as a whole), which in general are intended for the benefit of the community without ignoring ownership rights, because in Islam the right of property ownership, in whatever form, must be protected. These matters will be studied further to obtain understanding and information regarding the existence and protection of IPR, especially in the current digital era, both according to existing positive law and in the view of Sharia economic law, which, of course, derives from Islamic law, one of the raw materials of national law.

Digital law and technology, commonly called information technology law, is a functional law and legal discipline that has received widespread attention in recent decades (Custers, 2022). New technologies such as big data, artificial intelligence, blockchain technology, and advanced algorithms raise questions regarding their regulation, such as what rights and protections citizens have or should have (Barocas 2016; La Fors 2019).
Several studies on IPR have been carried out from both a positive legal perspective and an economic perspective. This study will therefore discuss IPR specifically from the perspective of Sharia economic law and relate it to the development of digitalization, which is currently highly developed, because digitalization allows the use of technology for various activities and thereby opens opportunities for the misuse of IPR. The analysis covers the protection, use, and utilization of IPR in the current digital era, through a normative juridical study using a statute approach, investigating existing laws and regulations both in the positive legal framework and in the Islamic legal framework based on the Koran, the Hadith, and the fatwas of the Ulama.

Literature Review

This study is a normative juridical study based on the statute approach, namely a statutory approach to find legal rules originating from legal materials that live and grow in Indonesia as The Living Law, especially Islamic law, which has been transformed into the national legal system and has thereby become positive law in the field of IPR. The search is completed by editing the legal materials found. Supplemented with other secondary data, analysis and interpretation are carried out to reach a conclusion.

Intellectual property rights, known as IPR, contain many aspects, one of which is copyright. Copyright is a unique, special, or exclusive right given to the creator or copyright holder. This particular right means that no other person may use this right except with the permission of the creator or copyright holder concerned (Rachmadi, 2003: 86). This understanding of copyright leads to a belief in the need to understand IPR and its aspects. IPR regulates various works that arise or are created through human intelligence, such as copyrights, patents, trademarks, industrial designs, and trade secrets. IPR is not just a legal or technical issue but also concerns economic interests, and violations of IPR can not only cause losses to the state and the inventor but can also disrupt economic relations between countries and can even lead to political tensions between countries. Intellectual property is unique among factors of production in that it is intangible (thus non-rivalrous) and exists only where the law defines it. This law, however, is territorial, not global. A country does not have to provide patent protection to foreign inventors (Gmeiner 2021; Jensen 2017). To clarify, IPR is the right to property that arises or is born from human intellectual abilities. The work in question consists of science, art, literature, or technology, produced through sacrifices in the form of energy, time, and even costs, so that the work certainly has economic value or provides benefits (IPR Introduction Guide, Indonesian Ministry of Industry). Meanwhile, IPR, or intellectual property, in the view of Sharia economics is an individual right that must be protected, and its use must accord with sharia provisions, because rights are a gift from the owner of all rights, namely Allah (M. Musyafa, 2013). Thus, the main objective of IPR is to provide optimal economic benefits to rights owners and society as a whole while ensuring distributive justice, compliance with the law, public benefit, and transparency in its use. In practice, the implementation of IPR in Sharia economics must reflect Sharia economic principles and follow a fair and ethical framework, so as to provide optimal economic benefits for rights owners and society at large.
In the guide to introducing IPR issued by the Ministry of Trade, it is stated that IPR is beneficial not only for the business world, because it protects against misuse or counterfeiting of intellectual works owned by other parties domestically or abroad, but also for investors and the Government; above all, it benefits the rights holder, who receives legal protection, can take legal action if necessary, and can grant permission to use the IPR they own. In this regard, it is essential to pay close attention to the types of IPR protected by law in Indonesia (Introduction Guide to IPR), together with the latest legislation regulating them, namely:

i. Hak Cipta (Copyright) (UU No. 28 of 2014). Hak Cipta is the creator's exclusive right, which arises automatically on a declarative basis once a work is realized in tangible form, subject to the restrictions of statutory provisions.

ii. Patents (UU No. 13 of 2016, amended by Law No. 11 of 2020). A patent is an exclusive right granted by the state to an inventor for an invention in the field of technology, for a certain period, to implement the invention himself or to approve its implementation by another party. Patents protect technical solutions that are new and applicable in industry.

iii. Brands (UU No. 20 of 2016). A brand is a sign that can be displayed graphically in the form of an image, logo, name, word, letter, number, or color arrangement, in two or three dimensions, as sound, as a hologram, or as a combination of two or more of these elements, used to differentiate goods and services produced by individuals or legal entities in goods and services trading activities.

iv. Industrial Design (UU No. 31 of 2000). Industrial Design is the creation of a shape, configuration, or composition of lines or colors, or lines and colors, or a combination thereof, in three-dimensional or two-dimensional form, which gives an aesthetic impression, can be realized in three-dimensional or two-dimensional patterns, and can be used to produce a product, goods, industrial commodity, or handicraft.

v. Trade Secrets (UU No. 30 of 2000). Trade secrets are information in the field of technology and business that is not known to the public, has economic value because it is useful in business activities, and is kept confidential by the owner of the trade secret.
vi. Integrated Circuit Layout Design, abbreviated DTLST (UU No. 32 of 2000). An integrated circuit is a product, in finished or semi-finished form, that contains various elements, at least one of which is an active element, partially or wholly interconnected and formed in an integrated manner in semiconductor material, and intended to produce an electronic function. A layout design is a creation in the form of a three-dimensional arrangement of those elements, at least one of which is an active element, together with some or all of the interconnections in an integrated circuit, the three-dimensional arrangement being intended to prepare for the production of the integrated circuit.

In connection with Sharia economic law, it should be explained that several fundamental principles of Sharia economics must be taken into account in the context of protecting IPR, such as the principle that there is no absolute ownership of anything; all existing resources are a gift from Allah, the economy should move collectively, and the rights of society must be guaranteed, with planning for the benefit of many people. Verses from the Koran and Hadith that support these principles include QS. Al-Hujurat: 13, which expresses the importance of knowing each other and that a person's superiority before Allah is determined by his piety. A hadith on obeying the law states that Allah likes people who fulfill their promises and can be trusted in their business affairs (HR. Abu Daud). Likewise, the principle of public benefit emphasizes the importance of paying attention to the public interest in every action. This principle demands that the use of intellectual work be carried out in a halal manner and provide benefits to society. A verse related to public benefit can be found in QS. Al-Baqarah: 195, which emphasizes that we should not fall into ruin and should always do good, because Allah loves those who do good.

In providing IPR protection, several obstacles must be considered, namely gaps in understanding and awareness of the importance of IPR. This happens because many still do not understand the importance of IPR protection in encouraging innovation and creativity in various fields. Apart from that, because IPR is an aspect of protection adhered to by all countries in the world, while not all countries have a strong and effective IPR protection system, collaboration between countries, especially Muslim countries, is needed in protecting and utilizing IPR. These obstacles need to be anticipated, especially now that digitalization is so massive and affects all dimensions of life, including the economic sector, whose reach touches many areas of life, such as the banking world, the business world, and others (Setyaningsih Sri Utami, 2010). Specifically, the influence of digitalization on the economy is enormous, namely the ease of marketing products, the existence of new services for carrying out buying and selling transactions, and an increase in productivity. Still, these benefits are also accompanied by negative impacts, such as the opportunity for illegal transactions to occur (Bakti, 2018) and even the possibility of violating established IPR protection. This is possible because, according to Gartner, digitalization is generally the switch from analog to digital in the business realm to obtain new sources of income and business opportunities.
Also, Dr. J. Scott Brenner and Daniel Kreiss say that digitalization tends to restructure digital communication and media infrastructure in various elements of human life, which can change human interactions (DTI, 2023).

Changes in human interaction in the current millennial era are felt because almost all activities are supported by and interact with digital systems, for example, online markets (e-commerce), online tickets, digital books, and so on, including, in the field of copyright (one part of IPR), the activity of enjoying music, songs, and films. These things make it easy for creators to introduce and disseminate their creations. Still, other aspects harm their creative works, as conveyed by Razilu (Acting Director General of KI), who stated that modern and sophisticated technology makes it easier for people to pirate copyrighted music and songs. Prof. Ramli said that currently, consumers can directly access thousands or even hundreds of thousands of songs even though they do not own the copyright (Fitri Novia Hariani, 2022), while on the other hand, the reproduction of copyrighted works for educational purposes, for research (as long as it is non-commercial), and for evidentiary purposes in court is considered not a copyright violation (Danrivanto, 2007). Substantially, this looks like a difference in treatment, but in any case it shows the need for further scrutiny of the Copyright Law (UUHC) as one part of IPR, because the digital era has made all of these things possible. Even WIPO (World Intellectual Property Organization), at its conference in Geneva attended by 160 countries in December 1996, recognized the importance of responding to environmental changes in order to protect copyrighted works, issuing two international conventions, the WIPO Internet Treaties, namely the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT), two products responding to developments in the digital environment (Khwarizmi Maulana Simatupang, 2021).

Another matter related to the importance of IPR in this era of digitalization is protection for business actors, because trademarks can easily be misused or stolen through cybersquatting techniques, namely third parties registering trademark domain names to make money or to damage the brand's reputation; moreover, the emergence of e-commerce platforms has led to high levels of product counterfeiting and the sale of counterfeit goods, which hurts customers and brand owners. However, the digital era also offers IPR protection patterns, namely systems that can monitor and track using blockchain and that can execute payments automatically (Silvana Juliant, 2023). This shows that it is essential to protect brands, copyrighted works, and other forms of IPR, and this has been anticipated by the issuance of several statutory regulations in line with Article 27(2) of the Universal Declaration of Human Rights, which states that everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author (Universal Declaration of Human Rights).
The inclusion of this right in the Universal Declaration of Human Rights indicates that the protection of IPR is recognition of the protection that needs to be given to humans, who hold these rights as a fundamental and universal value, and whose rights are vulnerable to being taken, controlled, or abused by other humans. So, IPR needs to be protected. In line with this, it is necessary to study the view of Islamic law, especially Sharia economic law, regarding IPR, because apart from being a human right, IPR also has economic content, which must be protected since it has the potential to be taken over or misused.

Islam is the religion of rahmatan lil alamin (Farah Ramadanti, 2023), meaning that Islam, whose presence amid people's lives can create peace and compassion for humans and the universe, is also defined by values that do not justify discrimination due to differences in religion, ethnicity, race, and nation. Islam does not prohibit someone from being creative or channeling ideas to fulfill needs so as to obtain the highest benefit and utilization (Idri, 2021). These values imply that Islam has universal, global, and comprehensive values for everyone. They are reflected in the Al-Quran, surah Ar-Rum verse 22, which means: "and among the signs of His power is the creation of the heavens and earth and the diversity of your languages and the color of your skin. Indeed, in that there are true signs for those who know." In line with this, from the perspective of Sharia economic law, which is based on Islamic values, talking about IPR requires first knowing how Islam views the issue of rights. Hasbi Ash Shidiqiy stated that rights in Islam are divided into two, namely in a specific and a general sense. In a specific sense, rights are basic rules that must be obeyed in human relations, whether regarding people or property. In a general sense, rights are Sharia provisions for determining a power or a legal burden (Hasbi as Shidiqiy, 1999). Meanwhile, property in Islam is defined as control over something owned that allows one to act legally on it by buying and selling, renting, waqf, or lending it to other people (Nasroen et al., 2007). This gives birth to a fundamental understanding that Islam does not recognize absolute control in ownership, especially of knowledge; in other words, it does not condone monopolizing and withholding knowledge so that other people cannot access it. On the contrary, Islam encourages people to spread knowledge and share it because, in essence, everything belongs to Allah, as hinted at in the Koran, surah Al-Baqarah verse 189, the translation of which reads: "To Allah belongs the Kingdom of the heavens and the earth, and Allah is Almighty over all things." Thus, everything in this universe has Allah as its owner, and humans were created in a weak and ignorant state with no ability or strength of their own. However, Allah gave humans reason, and thus the ability to create and discover, and this, too, rests on the will and grace of Allah's love. Therefore, IPR, as a right arising from human abilities, must be protected and shared for the benefit of humanity. On the other hand, people cannot arbitrarily use IPR owned by others without the owner's permission, because it is a right that God gave to someone. That is why protection is needed. IPR is referred to in Islam as haq al-ibtikar, a right of economic value (the right to obtain economic benefits) and a moral right (a right inherent in the creator) that cannot simply be taken (Ade Hidayat, 2014).
Even though, in the context of Islamic law, IPR is a new issue not discussed under any particular classical terminology, its status is still being debated and studied by the ulama (Mufliha Wijayati, 2014).

The discussion above suggests that the treatment of IPR or intellectual property in Islamic law is directed towards the concept of property ownership, so that its use is likewise linked to the wise use of other people's property. The Koran regulates this extensively. The main problems in the economy are injustice and distribution. Injustice causes the production process to be suboptimal, thus hampering increases in production and giving rise to a loss of mutual belonging, thereby reducing society's work ethic in general (Idri, 2021). One of the guidelines is contained in the Al-Quran, surah An-Nisa verse 29, the translation of which reads: "O you who believe, do not falsely devour each other's wealth, except through commerce carried out by mutual consent between you, and do not kill yourselves. Indeed, Allah is most merciful to you." The verses of the Koran that regulate the use of ownership rights are reinforced by several hadiths suggesting the same thing; for example, the Messenger of Allah said, "Indeed, your blood (soul) and property are haram (inviolable and protected)." This all illustrates how firmly Islam protects a person's ownership rights over their property, and property is not only a matter of material assets but extends to their usefulness. This is even clearer when referred to the theory of maqashid asy-sharia, namely that Islamic sharia aims at human benefit by safeguarding religion, soul, reason, honor, and property. On this basis, the Indonesian Ulema Council (MUI) issued a fatwa prohibiting violations of other people's intellectual property rights (Mufliha Wijayati, 2014).

Fatwa of the Indonesian Ulema Council Number 1/MUNAS VII/MUI/5/2005 concerning the Protection of Intellectual Property Rights (HKI) was issued at the VII MUI National Conference on 19-22 July 2005, in consideration of the fact that IPR violations had reached an alarming level and of a request from the Anti-Counterfeiting Society (MIAP) for the MUI to issue a fatwa on the status of IPR in Islamic law. The MUI fatwa is guided not only by the Al-Quran and hadith but also by many ulama's opinions, and it determines that the IPR category includes plant variety rights, trade secret rights, industrial design rights, integrated circuit layout design rights, patents, brand rights, and copyright. In Islamic law, IPR is seen as one of the huquq maliyah (property rights), which receives legal protection (mashun) in the same way as mal (wealth), and every form of violation of IPR, including but not limited to using, disclosing, making, selling, importing, exporting, circulating, handing over, providing, announcing, reproducing, plagiarizing, counterfeiting, or pirating other people's IPR without right, is an injustice and is haram. The MUI fatwa (MUI, 2005) clearly prohibits these actions. In fact, one online law article states that this MUI fatwa is "harsher" than positive law, because it judges not only those who copy and distribute but also those who use. Apart from that, it is also interesting that IPR can be used as the object of a contract and can be donated or inherited (Zae, 2005).
The MUI fatwa does not have binding power in the way statutory regulations do. However, it provides moral strength and a solid religious foundation for most Muslim business people and consumers. Especially in the current digital era, where plagiarism, counterfeiting, and unlawful use and exploitation of IPR can easily occur, enforcing statutory regulations is very important, and the strength of the moral and religious foundations will certainly reinforce existing statutory provisions. The current development of digitalization cannot be avoided and has even penetrated the religious sector, giving rise to the term religious digitalization. Many platforms now facilitate communication and interaction between Muslims (Bowo Pribadi, 2023) by disseminating religious knowledge and increasing the number of books/scriptures and other information available. However, it should be realized that if this is not monitored properly, it will disrupt the sacredness of religious values and, of course, the protection of the creative works of earlier scholars.

Conclusion

In the era of globalization and technological progress, intellectual property rights (IPR) significantly advance the economy. They also carry the potential for illegal use of other people's IPR, which must be monitored. For this reason, it is necessary to create a mechanism to enforce existing provisions so as to provide maximum protection to IPR owners. In Islam, IPR is wealth like property (mal), which must be protected, and it is haram (an injustice) if its benefits are taken without right. Efforts to increase understanding and awareness of the importance of IPR must also be accompanied by the development of an IPR protection system that accords with Sharia principles and by increased cooperation between Muslim countries in protecting and utilizing IPR globally.
Moisture Behavior of Pharmaceutical Powder during the Tableting Process

The moisture content of pharmaceutical powder is a key parameter contributing to tablet sticking during the tableting process. This study investigates powder moisture behavior during the compaction phase of the tableting process. The finite element analysis software COMSOL Multiphysics® 5.6 was used to simulate the compaction of microcrystalline cellulose (VIVAPUR PH101) powder and to predict temperature and moisture content distributions, as well as their evolution over time, during a single compaction. To validate the simulation, a near-infrared (NIR) sensor and a thermal infrared camera were used to measure tablet surface moisture and surface temperature, respectively, just after ejection. The partial least squares (PLS) regression method was used to predict the surface moisture content of the ejected tablet. Thermal infrared camera images of the ejected tablet showed the powder bed temperature increasing during compaction and a gradual rise in tablet temperature over successive tableting runs. Simulation results showed that moisture evaporates from the compacted powder bed to the surrounding environment. The predicted surface moisture content of ejected tablets after compaction was higher than that of loose powder and decreased gradually as tableting runs increased. These observations suggest that the moisture evaporating from the powder bed accumulates at the interface between the punch and the tablet surface. Evaporated water molecules can be physisorbed on the punch surface and cause local capillary condensation at the punch-tablet interface during dwell time. A locally formed capillary bridge may induce a capillary force between tablet surface particles and the punch surface and cause sticking.

Introduction

Tableting is the mechanical process adopted by the pharmaceutical industry to produce medicinal tablets. It involves direct compression of a powder mixture in a die in three steps: die filling, during which the formulation is delivered to the die cavity; compaction, during which pressure is applied to the formulation; and ejection, when the compacted tablet is ejected from the die cavity [1]. Up to 1 million tablets can be pressed per hour with one multi-die rotary press. The formulation compacted during the process is a mixture of active pharmaceutical ingredients and excipients, such as ligands, binders, and lubricants, with a given moisture content. The study of the physics of the tableting process is of great interest for understanding the mechanism of consolidation of the granules during compaction, the punch-sticking phenomenon, and the quality management of the pressed product. Several mathematical equations, such as Heckel's [2], Kawakita's [3], and Leuenberger's [4] equations, have been developed to investigate the ability of formulations to flow, deform, and consolidate under pressure. These equations are usually used to predict how a formulation performs during the tableting process and to minimize defects related to the pressing process.

fatty acids, amides, polymers, and water [45,46]. In the NIR region, (1) bands arising from overtones and combinations of O-H, N-H and C-H vibrations appear strongly due to their large anharmonicity, and (2) large band shifts are induced by the formation of hydrogen bonds and by hydration. Within the NIR wavelength range, i.e., 700 to 2500 nm, prominent absorption bands of liquid water are found at 760, 970, 1190, 1450, and 1940 nm [47,48].
These absorption bands are due to the second overtone of the O-H stretching band (3ν1,3), the combination of the first overtone of the O-H stretching band and the O-H bending band (2ν1,3 + ν2), the first overtone of the O-H stretching band (2ν1,3), and the combination of the O-H stretching band and the O-H bending band (ν1,3 + ν2) [49]. Hence, NIR spectroscopy has been used to measure the moisture content of pharmaceutical tablets and powders during granulation, drying, and the tableting process [46,50,51]. The quantification of moisture content is often based on an experimental calibration model obtained with partial least squares (PLS) regression, multiple linear regression, or principal component regression. The calibration model thus established is then used to predict the inline or offline moisture content of powder or tablets during such pharmaceutical processes as wet granulation or tableting [52][53][54].

This study aimed to investigate phenomenologically the buildup of the sticking phenomenon during the compaction of pharmaceutical powder through the simulation of powder moisture behavior under compression. The AFEM method, based on the modified DPC model, was used to investigate the moisture transport and thermomechanical behavior of the powder during the tableting process, particularly during the compaction phase. The evolution and distribution of powder bed density, temperature, and moisture during the compaction process were studied. We hypothesized that the migration of powder moisture toward the compression tools, driven by temperature and local humidity gradients, contributes to the development of sticking. The results of this study show how thermomechanical phenomena can lead to moisture migration and potentially induce sticking through capillary forces. To validate the simulation results, the temperature of the tablet surface and sides and the surface moisture content of the tablet one second after ejection were measured using PAT tools, namely a thermal infrared camera and an NIR sensor. The results provide new and original insights into the phenomenology of the sticking defect observed during the tableting process.

Materials

Microcrystalline cellulose (MCC) grade VIVAPUR PH101 (JRS Pharma, LP., Patterson, New York, NY, USA) was used as the model powder material. The bulk density of the loose powder was measured with a 100 mL beaker and was found to be 279 ± 2.5 kg/m³. The true density of dry, solid MCC particles ranges between 1512 and 1668 kg/m³ [55]; hence a value of 1570 kg/m³ [20] was adopted in this study. The powder was stored at a controlled humidity level of 33.5 ± 0.1% and a temperature of 21.1 ± 0.5 °C. The water content of the powder was measured with an analytical transmitter (Mettler M100, Mettler-Toledo Ltd., Leicester, UK), which dries the powder with an infrared source and calculates the water content by mass difference. The mean of three measurements was calculated, and the value obtained for the water content was 4.5 ± 0.3%. The water content density of the powder in the die before compaction was obtained by the following formula:

ρ_app_w = ρ_app · w  (1)

where ρ_app_w is the apparent moisture content density (kg/m³), ρ_app is the apparent powder density in the die before compaction (kg/m³), and w is the moisture content (%) of the powder before compaction. With a powder mass of 500 mg at the maximum capacity of the compression die (height = 12 mm, diameter = 11 mm), the apparent density of the powder before compaction was 438.6 kg/m³. The mean value of apparent moisture content density obtained from the three measurements of moisture content was 20.85 ± 1.2 kg/m³.
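As a quick numeric check of Equation (1), the short sketch below reproduces the apparent density and apparent moisture content density quoted above; it is an illustrative calculation, not code from the study, and uses the measured mean moisture value of 4.5% (the reported mean moisture density of 20.85 kg/m³ averages the three individual moisture measurements).

```python
# Minimal sketch: apparent moisture content density of the powder in the die,
# from rho_app_w = rho_app * w (Equation (1)). Values are taken from the text.
import math

m_powder = 0.500e-3    # powder mass in die, kg (500 mg)
h_die = 12e-3          # die fill height, m
d_die = 11e-3          # die diameter, m

v_die = math.pi * (d_die / 2) ** 2 * h_die  # die fill volume, m^3
rho_app = m_powder / v_die                  # apparent powder density, kg/m^3

w = 0.045                                   # moisture content, 4.5% mass fraction
rho_app_w = rho_app * w                     # apparent moisture density, kg/m^3

print(f"rho_app   = {rho_app:.1f} kg/m^3")    # ~438 kg/m^3
print(f"rho_app_w = {rho_app_w:.1f} kg/m^3")  # ~19.7 kg/m^3 at w = 4.5%
```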
Heat Capacity of the Material

The heat capacity (Cp) of the powder was determined with a differential scanning calorimetry apparatus, NETZSCH DSC 404F3 (NETZSCH-Gerätebau GmbH, Wittelsbacherstraße 42, Selb, Germany), with platinum as the reference material. The test powder sample weighed 23.7 mg, and the measurements were taken under an N₂ atmosphere with the temperature ranging from 29 °C to 130 °C and a heating rate of 5 °C/min. In this study, the heat capacity was taken as constant during powder compression.

Thermal Diffusivity and Conductivity

The thermal diffusivity of MCC PH101 powder compacted at 200 MPa pressure with a 12-ton manual hydraulic press (Carver) was measured with a laser flash analysis apparatus, NETZSCH LFA 457 (NETZSCH-Gerätebau GmbH, Wittelsbacherstraße 42, Selb, Germany). The test sample thickness was 1.71 mm and the diameter was 25 mm. Measurements were taken under an N₂ atmosphere at temperatures ranging from 29 °C to 130 °C with a heating rate of 5 °C/min. A neodymium glass laser with a wavelength of 1054 nm was used, with an energy of 10 J per 0.3 ms pulse and a voltage of 2978 V. The measurement principle is illustrated in Figure 1. The front surface of the plane-parallel sample was heated by a laser pulse, and the resulting temperature increase at the sample's rear face was recorded as a function of time. Assuming one-dimensional heat flow, the thermal diffusivity was calculated from this temperature rise as follows [56]:

α = 0.1388 d² / t₀.₅  (2)

where α is the thermal diffusivity in m²/s, d is the sample thickness in m, and t₀.₅ is the half-time (time value at half-signal height) in seconds.
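A minimal illustration of Equation (2) is given below; the half-rise time is a hypothetical value chosen only to show the calculation, since the measured flash signal itself is not reported here.

```python
# Hedged sketch: laser-flash thermal diffusivity via the half-rise-time
# formula used above, alpha = 0.1388 * d^2 / t_half.
d = 1.71e-3     # sample thickness, m (value reported in the text)
t_half = 2.7    # hypothetical half-rise time, s (would be read from the LFA signal)

alpha = 0.1388 * d ** 2 / t_half   # thermal diffusivity, m^2/s
print(f"alpha = {alpha:.2e} m^2/s")  # ~1.5e-7 m^2/s for these assumed values
```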
From the measured thermal diffusivity and heat capacity, the thermal conductivity (λ) was calculated with the following equation:

λ = α · ρ_app · Cp  (3)

where λ is the thermal conductivity (W/(m·K)), α is the thermal diffusivity (m²/s), ρ_app is the apparent density of the tablet at maximum compression (kg/m³), and Cp is the heat capacity (J/(kg·K)) of the powder. The thermal conductivity of the die tooling was obtained from the COMSOL Multiphysics® 5.6 software (provided by CMC Microsystems, Pavillon 1, 3000 boul. de l'Université, Sherbrooke, QC, J1K 0A5, Canada) [57]. Like the heat capacity, the thermal conductivity of the punches and die is a function of the local temperature during powder compression.

Powder Retention/Sorption Curve

The isothermal moisture sorption (retention) curve of MCC PH101 was obtained by storing 30 g of powder overnight at 25 ± 2%, 35 ± 2%, 50 ± 2%, 65 ± 2%, 80 ± 2%, and 97 ± 2% relative humidity in a hermetically sealed box. The powder was collected after 12 h, when the powder humidity was assumed to have reached an equilibrium state with the surroundings. Powder moisture content was measured one second after it was collected from the box. Three replicates of the powder sample were stored at each relative humidity.

Relative Tablet Density

The relative density of the tablet was calculated from the dimensions of the out-of-die tablet at different levels of axial upper punch pressure:

RD_calc = ρ_tablet / ρ_particle, with ρ_tablet = m_tablet / V_tablet  (4)

where ρ_tablet is the tablet density (kg/m³), ρ_particle is the true powder particle density (kg/m³), and m_tablet and V_tablet are the tablet weight (kg) and volume (m³), respectively.

Tableting Process

The powder was compressed with a single-station tablet press (Manesty F3, Federal Equipment Company, Cleveland, OH, USA). The press was operated at a compression speed of 32 mm/s, measured with a proximity sensor (NPN, 10 mm, Hall effect, 3 wires, normally open, 5-24 VDC, 200 mA, 320 kHz) coupled with a tachymeter. Compaction pressure was measured with a calibrated load cell (MLC-10K, Transducer Techniques, Temecula, CA, USA) fixed in a housing at the top of the upper punch. The punches and die (Natoli Engineering Company, Inc., St. Charles, MO, USA) were made of ERS S7 steel and A7 steel, respectively, with a punch diameter of 10.97 ± 0.01 mm and an inner die diameter of 11 ± 0.01 mm. The maximum compression pressure of the tablet press was set at 410 MPa. The radial pressure developed on the die wall during compaction was measured indirectly with a linear strain gage (Micro-Measurements, a VPG brand, Raleigh, NC, USA) with a resistance of 350 ohm ± 0.3% and a gage factor of 2.09. The strain gage was fixed at the lower middle of the exterior wall of the die. The radial pressure was calculated from the hoop stress developed on the exterior surface of the die, which was obtained from the measured strain and the Young's modulus of the die material, as sketched below. The data were acquired with a logger (DI-718B-US, DataQ Instruments, Akron, OH, USA) coupled with amplification modules (8B38) for the strain gage and load cell. The sensors' output was processed with the free WinDaq software on Windows 10.
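One way to sketch the strain-to-pressure conversion is shown below, assuming the die behaves as an open-ended thick-walled (Lamé) cylinder loaded only by internal pressure; the Young's modulus, die outer diameter, and strain reading are hypothetical values, and the authors' exact formula is not given in the text.

```python
# Hedged sketch: die-wall radial pressure from outer-surface hoop strain,
# assuming a thick-walled (Lame) cylinder with internal pressure only.
# At the outer radius, sigma_theta = 2 * p_i * r_i^2 / (r_o^2 - r_i^2),
# and the (uniaxial) outer-surface hoop stress is sigma_theta = E * eps.
E_die = 200e9        # Young's modulus of die steel, Pa (assumed)
r_i = 11e-3 / 2      # die bore radius, m (from the text)
r_o = 38e-3 / 2      # die outer radius, m (hypothetical)
eps_theta = 120e-6   # measured hoop strain, m/m (hypothetical reading)

sigma_theta = E_die * eps_theta                            # hoop stress, Pa
p_radial = sigma_theta * (r_o**2 - r_i**2) / (2 * r_i**2)  # die-wall pressure, Pa
print(f"radial (die-wall) pressure ~ {p_radial / 1e6:.0f} MPa")
```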
Tablet Surface and Peripheral Temperature

During compression, the temperature of the top and sides of the tablet was measured one second after its ejection from the die cavity. A FLIR E4 thermal camera (Teledyne FLIR LLC, Wilsonville, OR, USA) with automatically adjustable emissivity was used to record the tablet temperature. The FLIR was held manually to focus the pointer on the region of interest. The distance between the FLIR and the ejected tablet was approximately 20 cm, which allowed more precise recording of the temperature, measured at different compaction levels, i.e., relative densities, for validation against the simulation results. For each compaction level, an average temperature was calculated from 10 samples.

NIR-Penetration Depth

To estimate the NIR-radiation penetration depth in the tablet during scanning, an MCC disk and regular paper were placed on top of an acetaminophen (N-acetyl-para-aminophenol, or APAP) tablet. The regular paper disks were used because of their chemical similarity to MCC and to provide thinner layers. All the samples had the same diameter, 20 mm. The regular paper had a thickness of 74 microns and a porosity of 57%. The MCC disk was obtained from MCC powder pressed at 220 MPa and had a thickness of 0.45 mm and a porosity of 38%. The porosity (ε) of the paper and MCC disk was estimated from Equation (4) (ε = 1 − RD_calc; a numerical sketch follows this section). APAP powder was supplied by Sigma-Aldrich Canada Co. (2149 Winston Park Drive, Oakville, ON L6H 6J8, Canada). Figure 2 shows the MCC disk (Figure 2a) and one (Figure 2b) and five paper sheets (Figure 2c) on top of the APAP tablet. First, the APAP tablet, MCC disk, and regular paper were scanned individually to register their respective spectra. Then, disks of increasing total thickness were stacked on top of the APAP tablet until the corresponding fingerprint of acetaminophen (absorbance from a functional group in a specific region) vanished completely. The thickness at which APAP was no longer detected was chosen as the maximum penetration depth of the NIR radiation. The regular paper was used first to find the maximum penetration depth and then replaced by the MCC disks to verify the results. The measurement setup was kept fixed to avoid variation in the height at which the sample was scanned. The humidity and temperature in the lab were also controlled.
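The sketch below reproduces the quoted 38% porosity of the MCC disk via Equation (4); the disk mass is a hypothetical value, since only the dimensions, porosity, and true particle density are reported.

```python
# Minimal sketch: porosity from Equation (4), eps = 1 - RD_calc, where RD_calc
# compares the disk's apparent density with the true particle density.
import math

rho_true = 1570.0   # true particle density of MCC, kg/m^3 (from the text)
d_disk = 20e-3      # disk diameter, m (from the text)
t_disk = 0.45e-3    # disk thickness, m (from the text)
m_disk = 0.138e-3   # hypothetical disk mass, kg

v_disk = math.pi * (d_disk / 2) ** 2 * t_disk   # disk volume, m^3
rd_calc = (m_disk / v_disk) / rho_true          # relative density
eps = 1 - rd_calc                               # porosity
print(f"RD_calc = {rd_calc:.2f}, porosity = {eps:.0%}")  # ~0.62, ~38%
```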
Determination of Relative Tablet Moisture Content Density

The tablet surface moisture was measured with an NIR sensor in reflection mode over the wavelength range 1100-1700 nm. A calibration model was developed with manually compacted tablets. Then, the tablets pressed with the Manesty single-punch press were scanned inline to predict the tablet surface moisture content. To create the calibration model, the powder was spread in an aluminum foil container and equilibrated overnight at 25 ± 2%, 35 ± 2%, 50 ± 2%, 65 ± 2%, 80 ± 2%, and 97 ± 2% relative humidity and room temperature in a humidity-controlled chamber; 30 g of powder was used for each relative humidity level. After 12 h, the powder was removed from the humidity-controlled chamber, poured into a small, closed plastic box, and stirred manually for 30 s. Powder (3 g) was then sampled from the plastic box for water content measurement, and the rest of the batch was used to manually produce 5 tablets with a low compaction level (thickness: 6.5 ± 0.8 mm), each weighing 0.5 g. Each tablet was individually scanned five times (25 scans for each relative humidity level). Scans of 30 tablets in total were used for the calibration. The tablets were made in an 11 mm die and pressed with a punch by hand to apply very low pressure, since high pressure can raise the powder particle temperature. All steps in the scan collection phase were performed on the compression plate to mimic the conditions during the tableting process. The water content measured on the conditioned powder used to make the tablets was taken as the reference value for these tablets' surface moisture content. The moisture content of a manually produced tablet was considered homogeneous throughout the tablet and equal to the moisture content of the powder from which it was made. The time between powder sampling from the plastic box and manual tablet compression followed by the NIR scan was less than 5 s; hence the moisture content was expected to be preserved. The powder moisture measurement and the manual tablet compression were performed simultaneously. For the calibration model, the moisture content was selected to range between 4.2% and 14%.
This range of moisture content was chosen to build a robust model that could predict high moisture contents with minimum error. In fact, the temperature of the tablet was shown to increase during compression, which induces the migration of water molecules in the powder bed to the external surface of the tablet. The raw spectra collected from the tablets presented noise due to interference, and baseline shifts due to multiplicative and additive effects caused by the inhomogeneity of the tablet microstructure. All spectra (Figure 3a) are plotted versus the spectral mean, as shown in Figure 3b. The red line represents the diagonal, introduced for comparison; the slope of the lines (formed by the scatter points) and their intersection with the vertical (absorbance) axis represent the multiplicative and additive effects, respectively [58,59]. Lines that have the same slope as the diagonal have insignificant multiplicative effects, and lines that share its intersection with the absorbance axis have insignificant additive effects. In Figure 3b, most lines (scatter points) have the same slope as the diagonal but a non-zero intersection with the absorbance axis. This suggests that some spectra are affected by the multiplicative effect and most of the spectra are affected by the additive effect. To remove these artefacts, different mathematical preprocessing functions, such as multiplicative scatter correction (MSC) and its extended version (EMSC), standard normal variate (SNV), the Savitzky-Golay (SG) filter, detrending, and first and second derivatives, can be used to treat the spectra [60]. To select the most appropriate preprocessing method for the present data, the pretreated spectra were plotted against the mean spectra and compared with the diagonal line. When the multiplicative and additive effects are removed from the raw spectra, the plotted lines align with the diagonal line. The preprocessing technique that allowed the plotted lines to fit best with the diagonal line was selected as the best-performing technique. In this study, the combination of EMSC and SG (17, 1, 0) was selected, and the results are shown in Figure 3c. The pretreated spectra are shown in Figure 3d.

After pretreatment, Hotelling's T² in principal component analysis (PCA) [61] with a 95% confidence interval was applied to the data to visualize them and remove outliers. The PLS regression method [62] was used to establish the calibration model with an optimal three principal components. K-fold cross-validation was performed on the data. K-fold cross-validation is an external validation (prediction) technique that here divides the data into six groups, each group containing the data of one moisture content level. A calibration model is established with five systematically selected groups and used to predict the sixth group. The process is repeated in a loop until all the groups have been predicted by the respective calibration models. At the end of the process, model performance [54] in calibration and prediction is measured with the regression coefficient (R²) and the root-mean-square error (RMSE) [54]. Mean R² and RMSE values in calibration and prediction are calculated from the six results.
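A hedged sketch of this calibration workflow is shown below. It assumes arrays X (spectra, one row per scan) and y (reference moisture, %) plus a groups vector marking the six humidity levels are already loaded; plain MSC stands in for the EMSC used in the study, and scikit-learn's PLS implementation is used with the SG (17, 1, 0) settings quoted above.

```python
# Minimal sketch (assumptions: X, y, groups preloaded; MSC used in place of
# EMSC): SG smoothing + scatter correction + 3-component PLS, validated by
# leaving one humidity group out at a time, as in the six-group scheme above.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import r2_score, mean_squared_error

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)  # fit s = slope*ref + intercept
        corrected[i] = (s - intercept) / slope    # remove additive/multiplicative effects
    return corrected

# X: raw spectra (n_scans x n_wavelengths); y: reference moisture (%);
# groups: humidity-level id per scan, e.g. np.repeat(np.arange(6), 25)
X_sg = savgol_filter(X, window_length=17, polyorder=1, axis=1)  # SG (17, 1, 0)
X_pre = msc(X_sg)

y_true, y_pred = [], []
for train, test in LeaveOneGroupOut().split(X_pre, y, groups):
    pls = PLSRegression(n_components=3).fit(X_pre[train], y[train])
    y_true.append(y[test])
    y_pred.append(pls.predict(X_pre[test]).ravel())

y_true, y_pred = np.concatenate(y_true), np.concatenate(y_pred)
rmsep = mean_squared_error(y_true, y_pred) ** 0.5
print(f"R2 = {r2_score(y_true, y_pred):.3f}, RMSEP = {rmsep:.3f} %")
```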
After the calibration model had been computed, the tablets were pressed with a compression pressure of 37.5 MPa (thickness: 4.83 ± 0.2 mm) and scanned directly on the tablet press table. Figure 4 shows the steps in the process of inline scan collection on the press. First, the pressed and ejected tablet was pushed from the die by the scraper (Figure 4a), and then the tablet was manually guided (Figure 4b) under the sensor to take measurements (Figure 4c). The entire process took less than 2 s; therefore, the tablet surface moisture was assumed to have been preserved.

Tablet Cohesion and Internal Friction Angle

Tablet cohesion and internal friction angle were determined with diametral and axial compression tests. The tablets' diametral and axial breaking forces were measured with a tablet hardness tester (C50, Whitehall International, New Lane, Havant, UK) and a manual press (Carver Laboratory Press, model 12 Ton, Fred S. Carver Inc., Wabash, IN, USA). Tablets compacted at relative densities between 40% and 95% were used for the measurements; tablets with a relative density < 40% were not hard enough for the tests. The hardness tester crushed the tablets along their diametral/radial line and displayed the maximum diametral breaking force (F_d). On the manual press, the tablets were pressed between two horizontal plates until they were crushed, and the axial breaking force (F_c) was obtained from the applied pressure and the tablet surface area. Equations (5) and (6) were used to calculate the breaking radial/diametral tensile strength σ_fd and the axial compressive strength σ_fc of the tablets [12,16]:

σ_fd = 2F_d/(π D t) (5)

σ_fc = 4F_c/(π D²) (6)

where D is the tablet diameter (mm) and t is the tablet thickness (mm). The breaking strengths thus determined were used to calculate the tablet cohesion (d) and internal friction angle (β) with Equations (7) and (8) [12,16]:

d = σ_fc σ_fd (√13 − 2)/(σ_fc − 2σ_fd) (7)

β = arctan[3(σ_fc − d)/σ_fc] (8)
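The following sketch shows how these relations chain together numerically. The strength formulas and the (d, β) expressions follow the standard DPC shear-failure calibration consistent with refs [12,16]; the forces and dimensions in the example are illustrative, not measured values.

```python
# Sketch of the shear-failure calibration: diametral and axial breaking
# strengths give two stress points on the q-p plane, from which cohesion d
# and internal friction angle beta follow. Units: Pa in, (Pa, degrees) out.
import math

def diametral_tensile_strength(F_d, D, t):
    """sigma_fd = 2*F_d / (pi*D*t) for a cylindrical tablet (Eq. (5))."""
    return 2.0 * F_d / (math.pi * D * t)

def axial_compressive_strength(F_c, D):
    """sigma_fc = 4*F_c / (pi*D^2): breaking force over the tablet cross-section (Eq. (6))."""
    return 4.0 * F_c / (math.pi * D**2)

def cohesion_and_friction_angle(sigma_c, sigma_d):
    """Cohesion d (Eq. (7)) and friction angle beta (Eq. (8)) from the two strengths."""
    d = sigma_c * sigma_d * (math.sqrt(13.0) - 2.0) / (sigma_c - 2.0 * sigma_d)
    beta = math.degrees(math.atan(3.0 * (sigma_c - d) / sigma_c))
    return d, beta

# Example with plausible MCC-like magnitudes (illustrative only):
sigma_d = diametral_tensile_strength(F_d=60.0, D=11e-3, t=4.8e-3)   # ~0.7 MPa
sigma_c = axial_compressive_strength(F_c=900.0, D=11e-3)            # ~9.5 MPa
print(cohesion_and_friction_angle(sigma_c, sigma_d))                # ~(1.4 MPa, ~69 deg)
```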
Numerical Method

The aim of the numerical analysis was to simulate the mechanical, heat, and moisture behaviors of the powder bed during the compaction phase. The simulation was based on coupling a solid mechanics module with the heat and moisture transfer in building materials module implemented in the finite element analysis solver COMSOL Multiphysics®.

Density-Dependent Drucker-Prager Cap Model

The density-dependent DPC model [12] has been used in several studies, which found high agreement between simulated and experimental measurements. The model consists of three surfaces (Figure 5a):

- The Mohr-Coulomb shear failure surface (F_s), representing shear flow, defined as

F_s = q − p tan β − d = 0

- The transition surface (F_t), representing a mathematical smoothing surface between the failure line and the cap, defined as

F_t = √[(p − p_a)² + (q − (1 − α/cos β)(d + p_a tan β))²] − α(d + p_a tan β) = 0

- The cap yield surface (F_c), defined as

F_c = √[(p − p_a)² + (R q/f)²] − R(d + p_a tan β) = 0

where β is the material friction angle (°), d is its cohesion (Pa), f = 1 + α − α/cos β, α is a small number (typically 0.01-0.05) used to define a smooth transition surface between the shear failure surface and the elliptic cap, and p and q are the hydrostatic pressure stress (Pa) and the von Mises equivalent stress (Pa), respectively. For die compaction these are expressed as

p = I_1/3 = (σ_z + 2σ_r)/3 and q = √(3J_2) = |σ_z − σ_r|

where σ_z and σ_r are the punch (axial) stress and the die-wall (radial) stress, respectively, and I_1 and J_2 are the first stress invariant and the second deviatoric stress invariant, respectively. The parameters needed for the elliptic cap are:

- The cap evolution parameter, p_a (Pa), which represents the volumetric plastic strain-driven hardening/softening, defined as

p_a = (P_B − R d)/(1 + R tan β)

where P_B is the pressure at maximum compression.

- The cap eccentricity, R, a material parameter between 0.0001 and 1000 that controls the shape of the cap; since the cap meets the hydrostatic axis at P_B, it is defined as

R = (P_B − p_a)/(d + p_a tan β)

- The hardening/softening law, a user-defined piecewise linear function relating the hydrostatic compression yield stress, P_b, and the corresponding volumetric inelastic/plastic strain, ε_v^p. The volumetric plastic strain can be expressed as

ε_v^p = ln(RD/RD_0)

where RD = ρ_bulk/ρ_true is the current relative density and RD_0 is the initial relative density; ρ_bulk is the tablet's apparent density (kg/m³) and ρ_true is the particle true density (kg/m³).
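A small sketch of the quantities involved in this calibration, under the forms given above: the (p, q) pair from the punch and die-wall stresses, the volumetric plastic strain from the relative densities, and the value of the shear-failure function. All input values are illustrative.

```python
# Sketch of the DPC bookkeeping for die compaction: (p, q) from the axial
# punch stress sigma_z and radial die-wall stress sigma_r, the volumetric
# plastic strain from relative density, and the shear-failure function F_s.
import math

def p_q_from_die_stresses(sigma_z, sigma_r):
    """p = I1/3 = (sigma_z + 2*sigma_r)/3 and q = sqrt(3*J2) = |sigma_z - sigma_r|."""
    return (sigma_z + 2.0 * sigma_r) / 3.0, abs(sigma_z - sigma_r)

def volumetric_plastic_strain(RD, RD0):
    """eps_v^p = ln(RD / RD0), with RD = rho_bulk / rho_true."""
    return math.log(RD / RD0)

def shear_failure(p, q, d, beta_deg):
    """F_s = q - p*tan(beta) - d; F_s < 0 means the point lies below the failure line."""
    return q - p * math.tan(math.radians(beta_deg)) - d

p, q = p_q_from_die_stresses(sigma_z=37.5e6, sigma_r=15e6)   # illustrative stresses (Pa)
print(p, q,
      shear_failure(p, q, d=1.4e6, beta_deg=69.0),           # d, beta from the tests above
      volumetric_plastic_strain(RD=0.65, RD0=0.26))
```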
Density-Dependent Drucker-Prager Cap Model Implemented in COMSOL Multiphysics®

In the present study, the DPC model version of COMSOL Multiphysics®, which uses two yield surfaces, was used [63]. In COMSOL, the Drucker-Prager cone (shear failure surface) always meets the yield cap surface at the point of tangency. Hence, no transition zone is required between the shear failure cone and the elliptical cap, and there is a unique and smooth transition between the two surfaces (Figure 5b). The surfaces are defined in terms of the first stress invariant I_1 and the second deviatoric stress invariant J_2 as follows.

- The Drucker-Prager yield function, F_s, is written in terms of the Drucker-Prager parameters α and k, which are obtained from the internal friction angle β and the cohesion d.
- The elliptical cap surface, F_c, is an ellipse in the (I_1, √J_2) plane, where J_a is the ordinate on the √J_2 axis at I_1 = I_a.

Here, the hardening law is defined in terms of p_b0, the initial location of the cap; K_iso, the isotropic hardening modulus; and ε_pvol,max, the maximum volumetric plastic strain. The cap eccentricity, R, is computed from these parameters. The hardening law in the density-dependent DPC model and that in COMSOL are different; therefore, the parameters K_iso and ε_pvol,max were chosen such that they approximately fitted the hardening-law curve obtained from the calibrated DPC model (a fitting sketch is given after this subsection). The large permanent deformation of the particles during compression was taken into account and a large-strain plasticity model was used. Hence, the relationship between the current relative density (RD) and the volumetric plastic strain is given by

RD = RD_0/J_p

where J_p is the plastic volume ratio.

Nonlinear Elastic Law

The unloading portion of the compression curve of the non-lubricated pharmaceutical powder shows nonlinear behavior, which can be attributed to the dilatation of the tablet during the unloading phase and is described using a nonlinear elasticity law. The density-dependent elastic moduli that characterize the elastic behavior, namely the bulk modulus K and the shear modulus G, are expressed as functions of the first stress invariant I_1, the stress invariant √J_2, and the compact apparent density ρ, in such a way as to avoid path dependence [12]. When considering the linear portions of the unloading curve, the portion between the maximum compression stress at point B, localized on the cap surface, and point C, where the hydrostatic state is reached (i.e., σ_r = σ_z), can be used to determine the elastic modulus using two equations [11]. With the elastic modulus thus determined, the density-dependent Young's modulus E and Poisson's ratio ν can be determined [11,64].
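As referenced above, a least-squares fit can be used to pick K_iso and ε_pvol,max. The sketch below assumes a logarithmic COMSOL-style hardening law p_b = p_b0 + K_iso·ln(1 + ε/ε_pvol,max); this form is our reading of the solver's law and should be checked against the COMSOL documentation. The tabulated hardening curve is a placeholder standing in for the calibrated DPC one.

```python
# Sketch: fit K_iso and eps_pvol_max of an assumed logarithmic hardening law
# to a tabulated hardening curve from the calibrated density-dependent DPC model.
# The tabulated (eps, p_b) pairs are placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def comsol_hardening(eps, K_iso, eps_max, p_b0=1.0e5):
    # Assumed form: p_b = p_b0 + K_iso * ln(1 + eps/eps_max)
    return p_b0 + K_iso * np.log(1.0 + eps / eps_max)

eps_tab = np.linspace(0.05, 0.9, 10)          # volumetric plastic strain
pb_tab = 1.0e5 + 2.0e8 * eps_tab**2           # placeholder calibrated curve (Pa)
(K_iso, eps_max), _ = curve_fit(comsol_hardening, eps_tab, pb_tab,
                                p0=(1.0e8, 1.0), bounds=([0.0, 1e-3], [1e12, 10.0]))
print(f"K_iso = {K_iso:.3e} Pa, eps_pvol_max = {eps_max:.3f}")
```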
Moisture and Heat Transport Simulation Model

During the tableting process, heat is generated by irreversible deformation and friction phenomena. The heat diffuses through the particles and increases the temperature of the entire powder bed during the compaction phase. The behavior of the inherent moisture of the powder particles remains unknown. Moisture content distribution and evolution can be investigated using the heat and moisture transport in building materials module in COMSOL. The heat, air, and moisture (HAM) model is widely used for porous materials [65,66] and granular materials [67]. It combines heat transfer, air diffusion, and moisture transport through porous materials. The generated heat diffuses in the powder by a combination of conduction (through the bulk particles) and convection (through the interparticle air), and through the tableting tools by conduction. The powder bed is a granular medium with interparticle pores, in which moisture can exist in two thermodynamic states of matter: vapor and liquid [66]. The main mechanisms associated with moisture transfer are vapor diffusion, capillary suction, or both. The material has a unique equilibrium moisture content characteristic curve that covers the hygroscopic and capillary water regions; these regions are commonly referred to as the sorption isotherm and the retention curve. Inside the hygroscopic region, the pores are mainly filled with water vapor, and consequently moisture transport is mainly achieved by vapor diffusion. Liquid water transport is possible when the pores are filled with liquid water. This flow mechanism is very active in the capillary water region, where the relative humidity is over 95%. Both vapor and liquid transport can coexist at the higher end of the hygroscopic region, where vapor diffusion and capillary suction are active in large and small pores, respectively. Water vapor diffuses into open pores and condenses on the capillary meniscus, whereas water evaporates into the next open pore space at the other end of the meniscus. This implies that the diffusion path is reduced, thereby increasing the moisture evaporation rate.

The moisture balance can be expressed using several thermodynamic potentials through transport-diffusion partial differential equations; relative humidity and temperature are the dependent variables of the mathematical model. The equation that governs moisture transfer is as follows [57,66]:

(∂w/∂ϕ) ∂ϕ/∂t + ∇·(g_v + g_l) = G

where the total moisture flux is the sum of the vapor diffusion flux g_v and the capillary moisture flux g_l, whose respective expressions are

g_v = −δ_p ∇(ϕ p_sat(T)) and g_l = −D_w (∂w/∂ϕ) ∇ϕ

where ϕ is the relative humidity (%), δ_p is the vapor permeability (m²/s), p_sat is the saturation pressure (Pa), w is the moisture capacity (kg/kg), D_w is the moisture diffusivity (m²/s), G is the generic moisture source (kg/(m³·s)), and T is the temperature (K). The moisture source consists of water molecules adsorbed on the powder surface and in the pores that evaporate during the tableting process. The coupled thermal analysis is solved with the following form of the energy balance equation [20,66]:

(ρC_p)_eff ∂T/∂t − ∇·(λ_eff ∇T) = L_v ∇·(δ_p ∇(ϕ p_sat)) + Q (31)

where ρ is the effective density (kg/m³), C_p is the specific heat at constant pressure (J/(kg·K)), λ is the thermal conductivity (W/(m·K)), L_v is the heat of evaporation (J/kg), and Q is the heat source (W/m³). The energy balance is applied under the assumption of local thermodynamic equilibrium (LTE). In the first term of Equation (31), the effective thermal capacity is defined as follows [20]:

(ρC_p)_eff = ρ_s C_p,s + w C_p,w

where (ρC_p)_eff is the effective volumetric heat capacity at constant pressure, defined to account for both the solid matrix and the moisture; C_p,s and ρ_s are the specific heat capacity and density of the dry solid; w is the water content given by the moisture storage function, i.e., the moisture sorption curve of the chosen material; and C_p,w is the heat capacity of water at constant pressure. The effective thermal conductivity λ_eff is a function of the solid matrix and moisture properties [20,66]:

λ_eff = λ_s (1 + b w/ρ_s)

where λ_s is the thermal conductivity (W/(m·K)) of the dry solid and b (dimensionless) is a thermal conductivity supplement due to the water content.
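The two effective-property closures can be evaluated directly; the sketch below implements them with illustrative MCC-like magnitudes (the density, heat capacities, conductivity, and supplement b are assumptions, not fitted values).

```python
# Sketch of the effective-property closures: (rho*Cp)_eff from the dry solid
# plus moisture, and lambda_eff with the conductivity supplement b.
def effective_heat_capacity(rho_s, cp_s, w, cp_w=4186.0):
    """(rho*Cp)_eff = rho_s*cp_s + w*cp_w, with w in kg water per m^3 of material."""
    return rho_s * cp_s + w * cp_w

def effective_conductivity(lambda_s, b, w, rho_s):
    """lambda_eff = lambda_s * (1 + b*w/rho_s)."""
    return lambda_s * (1.0 + b * w / rho_s)

rho_s, cp_s = 800.0, 1300.0      # dry bulk density (kg/m^3) and heat capacity (J/(kg K))
w = 20.85                        # water content from the storage function (kg/m^3)
print(effective_heat_capacity(rho_s, cp_s, w),                       # ~1.13e6 J/(m^3 K)
      effective_conductivity(lambda_s=0.18, b=8.0, w=w, rho_s=rho_s))  # ~0.22 W/(m K)
```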
Equation (31) contains two heat sources: the sensible heat due to moisture content variation in the powder, expressed as the vapor diffusion flux multiplied by the latent heat of evaporation (L_v), and the heat source caused by irreversible deformation, labeled Q. The heat source Q relates to plastic strains, stress, interparticle friction, and friction with the die walls, and is given as [18,20]:

Q = q_1 + q_2

where q_1 is the heat generated by interparticle friction and permanent deformation, and q_2 is the heat generated by particle-wall friction. Ninety percent of the energy needed to permanently deform the particles is assumed to dissipate as heat in the powder bed [18,20]; q_1 is defined as

q_1 = ξ σ : ε̇^p

where ξ = 0.9 represents the irreversibly deformed heat fraction [18,20], σ is the stress tensor, and ε̇^p is the plastic strain rate. The heat generated by friction is defined as

q_2 = µ A σ_nn υ

where µ is the wall friction coefficient (-), A represents the interacting area between the powder and the die wall (m²), σ_nn is the local normal stress (Pa), and υ is the norm of the local slip velocity at the interface between the powder and the die (m/s). Friction heat is an interface heat between the die wall and the powder compact. The portion of heat transferred to the powder compact can be calculated with the weighting factor proposed in [68-70] and adopted by [20]. Hence, the friction heat can be separated into two parts:

q_2,die = η q_2 and q_2,powder = (1 − η) q_2

where η is the weighting factor, which is estimated by Equation (39):

η = (λ_die/√α_die)/((λ_powder/√α_powder) + (λ_die/√α_die)) (39)

where λ_powder, λ_die, α_die, and α_powder are the powder and die thermal conductivities and the powder and die thermal diffusivities, respectively.
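A minimal sketch of this partition, using the effusivity form of Equation (39) as reconstructed above: the die share is η and the powder compact receives θ = 1 − η of the friction heat q_2. The property values are illustrative (an MCC-like powder against a steel-like die).

```python
# Sketch of the friction-heat partition: eta from the thermal effusivities
# lambda/sqrt(alpha) of die and powder; the powder receives (1 - eta) * q2.
# Property values are illustrative, not the calibrated ones.
import math

def weighting_factor(lam_powder, alpha_powder, lam_die, alpha_die):
    """eta = e_die / (e_powder + e_die), with effusivity e = lambda / sqrt(alpha)."""
    e_powder = lam_powder / math.sqrt(alpha_powder)
    e_die = lam_die / math.sqrt(alpha_die)
    return e_die / (e_powder + e_die)

eta = weighting_factor(lam_powder=0.18, alpha_powder=1.7e-7,   # MCC-like powder
                       lam_die=16.0, alpha_die=4.0e-6)         # steel-like die
theta = 1.0 - eta               # portion of friction heat entering the powder compact
q2 = 50.0                       # illustrative friction heat (W)
print(f"eta = {eta:.3f}, to powder = {theta * q2:.1f} W, to die = {eta * q2:.1f} W")
```

With these magnitudes the die, whose effusivity is far higher, takes most of the friction heat, and θ grows as compaction raises the powder's effusivity, consistent with Figure 8.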
Method of Calculation of Elastic Parameters and Drucker-Prager Cap Model Parameters

The calibration procedure used in this study to determine the DPC model parameters and the elastic parameters is described in [12]. The elastic parameters, namely Young's modulus and the Poisson coefficient, were determined from the unloading curve. Among the DPC model parameters, the cohesion (d) and internal friction angle (β) were first obtained with diametral and axial compression tests; the cap eccentricity (R) and hydrostatic compression yield stress (P_b) were then calculated with the d and β obtained.

Finite Element Model

The FEM simulations of the compaction phase of the tableting process were performed with the commercial COMSOL Multiphysics® 5.6 finite element analysis solver, which implemented the DPC plasticity model and the HAM model. MCC VIVAPUR PH101 was used as the model powder. A flat-faced, cylindrical compact geometry was modeled, with the compact/powder represented by a four-node, 2D axisymmetric model, as shown in Figure 6. The powder was modeled as a continuum: an isotropic, poroplastic, deformable medium in which the pores are uniformly distributed. The flat-faced upper/lower punches and the cylindrical die were modeled as analytical rigid bodies without any deformation. A slip velocity boundary condition between the powder and the die wall was set to simulate the rate at which compression occurred. The finite element simulations were modeled as quasi-static; therefore, punch velocity and dwell time were disregarded. Only the upper punch was set to move downward, to simulate single-sided compression. The initial powder fill height was 12 mm in a die with a radius of 5.5 mm, and the powder was compacted by the upper punch to a final height of 4.5 mm during the compaction stage. In the analysis, only the compaction phase of the quasi-static compaction process was simulated. The wall friction effect was considered through the adoption of a Coulombic boundary condition at the powder-die wall and powder-punch interfaces. Since the MCC PH101 was not lubricated, a static Coulombic friction coefficient of 0.21 was adopted from [12].

The heat was set to dissipate from the system through a heat flux at the interfaces with the die wall and the upper and lower punches. At the powder-die wall interface, a boundary heat source was set with a weighting factor that is a function of the relative density of the powder bed. The weighting factor was calculated with Equation (39), assuming that the thermal conductivity of the powder bed and the thermal conductivity and diffusivity of the die were constant during compaction. From the weighting factor obtained, the portion (θ) of the interfacial frictional heat that transfers to the powder compact by conduction at each level of compaction is shown in Figure 8. As the relative density of the powder bed increases, the portion of heat that diffuses into the powder bed also increases. The thermal diffusivity of the powder bed, α_powder = λ_powder/(ρC_p) (where λ_powder, ρ, and C_p are the thermal conductivity, apparent density, and heat capacity of the powder bed, respectively), decreases when the apparent density increases. Hence, the weighting factor η, which is approximately proportional to the square root of the thermal diffusivity of the powder bed, decreases during compaction and θ increases (Figure 8). The fraction of irreversible work converted into heat has been estimated to be in the range of 80-100% [20,69]. Hence, in this study, 20% of the input mechanical energy inside the powder bed was assumed to be stored as irreversible particle deformation and 80% was assumed to be converted into heat.
The thermal expansion coefficient of the powder was adopted from [18]. The coefficient U for heat transfer between the compression tools and the powder compact was determined with the thermal resistance method [71]. The heat transfer coefficients between the powder and the punches, U_powder-punch, and between the powder and the die wall, U_powder-die, were equal to 32 W/(m²·K) and 18 W/(m²·K), respectively (see Supplementary Material S3). In the proposed model, the moisture was set to flow from the powder compact toward the external environment through the upper punch interface. The powder compact-die wall and powder compact-lower punch interfaces were considered closed, so no moisture could evaporate from these interfaces to the surrounding environment. The experimental powder moisture sorption (moisture retention) curve was used in the model, as the moisture content of the powder bed is a function of the relative humidity of the powder during compaction. The moisture transport coefficients of MCC Vivapur PH101 used in the HAM model are given in our recent article [72]. The initial temperature of the powder was set to ambient (21.4 ± 0.2 °C), and the initial moisture content of the powder was set to 4.5%.

NIR Penetration Depth

This work aimed to study the surface moisture of pharmaceutical tablets; therefore, the penetration depth of the NIR radiation into a tablet was estimated. The NIR sensor was used in diffuse reflectance measurement mode. Figure 9 shows the SNV + SG (17, 2, 0) transformed NIR spectra of the APAP tablet, the MCC disk, and the regular paper, together with the NIR spectra of the MCC disk and the sheets of regular paper superimposed on the APAP tablet surface. Pretreatment was applied to remove additive and multiplicative effects and noise from the spectra. The aim of adding several thin layers on top of the APAP tablet was to determine the critical thickness at which the APAP tablet can no longer be detected by the NIR radiation. APAP absorbs NIR radiation at a wavelength of 1135 nm due to the C-H stretching second overtone in the aromatic ring [73]. Cellulose (MCC and regular paper) absorbs NIR radiation at 1220 nm due to hydrogen bond bending and at 1450 nm due to the first overtone of O-H stretching [73]. The absorption peak of APAP at 1135 nm vanished when five sheets of regular paper were superimposed on top of the APAP tablet, which corresponds to a thickness of 0.33 mm, or 330 µm (Figure 9a, pink spectrum). Therefore, the maximum penetration depth of the NIR radiation can be estimated to be less than 330 µm when a pressed cellulose such as regular paper is scanned. To confirm the results, an MCC disk of 0.45 mm (38% porosity and 4.25% water content) was placed on top of the APAP tablet, and the APAP absorption peak at 1135 nm was not detected (Figure 9b, green spectrum). The small air gap between the APAP tablet and the paper sheets or the MCC disk did not affect the NIR signal because the tests were done in dry air (RH = 27.4%). An increase in the moisture content of a granular material has been found to decrease the reflectance intensity of NIR light [74], because as the moisture content increases, more light is absorbed by the water molecules. The absorption of the incident NIR light by the water molecules at the surface of the scanned sample reduces its incident intensity, thereby decreasing its penetration depth into the sample according to the Beer-Lambert law [75]. Based on these results, the NIR radiation used in diffuse reflectance mode in this study can be considered to penetrate less than 450 µm into the MCC tablet, and the penetration depth will decrease further as the surface moisture content increases. Hence, only the tablet surface moisture is captured during the analysis.
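An order-of-magnitude sketch of the Beer-Lambert argument closes this section: with an exponential decay I/I0 = exp(−µ_eff·z), the depth at which the signal falls below a detection fraction shrinks as the effective attenuation coefficient rises with moisture. The µ_eff values and the 1% detection threshold are assumptions chosen only to show that depths of a few hundred micrometers are consistent with the measured < 450 µm bound.

```python
# Sketch of Beer-Lambert attenuation [75]: depth z where I/I0 = exp(-mu_eff*z)
# reaches a detection fraction. mu_eff values are assumed, not measured.
import math

def penetration_depth(mu_eff, detect_fraction=0.01):
    """Depth at which the intensity falls to the detection fraction."""
    return -math.log(detect_fraction) / mu_eff

for mu in (1.0e4, 2.0e4, 4.0e4):   # effective attenuation (1/m), rising with moisture
    print(f"mu_eff = {mu:.0e} 1/m -> depth ~ {penetration_depth(mu) * 1e6:.0f} um")
```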
Tablet Compression Curve

The compression stress-strain curve obtained for MCC VIVAPUR PH101 in this study at different densities of compaction is shown in Figure 10a. The curve obtained is characteristic of the behavior of MCC powder under pressure (Figure 10b) [12,16]. The stress-strain curve can be split into two parts: the loading and unloading phases. The loading phase, line AB, describes the volume reduction of the powder bed. During volume reduction, two main phenomena take place for soft particles such as MCC: the escape of excess air from the powder bed interstices, and densification. The latter can be explained by three complex mechanisms [16,76]. The first mechanism takes place at the early stage of compaction, during which the particles rearrange due to their inherent mobility. On contact, the tangential and normal components of the contact force between particles take part in the rearrangement and the elastic-plastic deformation of the particles, respectively. The second mechanism is the elastic-plastic deformation that occurs due to contact interactions between neighboring particles, leading to hardening. The third mechanism is described by the sharp increase in material flow resistance due to material strain hardening. The unloading phase, line BE in Figure 10, is where the upper punch is retracted and the tablet tends to expand due to its elastic recovery. During the first step, line BD, the material tends to release the excess energy stored in the compact that does not participate in the microstructure of the compact. At the end of the unloading phase, line DE, the powder exhibits strongly nonlinear behavior due to the dilatation of the particles in the compact. The stress-strain curve obtained is consistent with previous results [77].
Relative Density Evolution and Distribution

Mechanical simulation of the compaction phase of the tableting process yielded the evolution of the average relative density of the compact and its distribution over the 3D volume of the die. Figure 11a shows the onset of compaction with a uniform relative powder density of 0.26. During compaction, the upper edges of the powder bed were denser than the rest of the powder bed and the lower edges were the least dense (Figure 11b-d); similar results were obtained in other studies [20,29,41]. This can be explained by die wall friction, which induces a stress gradient inside the powder bed [38] and prevents the powder from sliding along the interfaces [78]. At the end of the compaction, the relative density of the powder was fairly uniformly distributed, and the top-right corner of the tablet showed a relatively denser region, which was confirmed with X-ray tomography analysis [27,79]. This highly dense region was found to be the point of origin of the crack that initiates tablet lamination [27,79].

To validate the mechanical simulation, the average relative density measured for the out-of-die tablet was compared with the relative density obtained from the simulation (Figure 12), which presents the variation in the relative density of the tablet with upper punch displacement. Good agreement between simulated and measured results was observed for lower compaction, i.e., low upper punch penetration. For higher compaction, i.e., high upper punch penetration (from 4 mm), the simulated relative density was slightly higher than the measured values. These observations are due to the fact that at lower punch penetration, i.e., lower compaction force, particle deformation is not large enough to cause high relaxation of the out-of-die tablet; hence, the density calculated from the tablet dimensions is relatively close to that obtained for the in-die tablet in the simulation. At higher penetration, i.e., higher compaction force, particle deformation becomes so important that the relaxation of the out-of-die tablet also becomes more important, strongly affecting the out-of-die tablet dimensions and consequently its relative density.
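The out-of-die relative density used for this comparison follows directly from the tablet mass, dimensions, and the particle true density; the sketch below assumes a typical literature value of about 1560 kg/m³ for the true density of MCC and uses illustrative tablet dimensions.

```python
# Sketch of the out-of-die check: apparent density from mass and dimensions,
# divided by the particle true density, gives the measured relative density
# compared with the simulated in-die value. rho_true ~ 1560 kg/m^3 for MCC.
import math

def relative_density(mass_kg, diameter_m, thickness_m, rho_true=1560.0):
    volume = math.pi * (diameter_m / 2.0) ** 2 * thickness_m
    return (mass_kg / volume) / rho_true

# e.g., a 0.5 g tablet, 11 mm in diameter, 4.83 mm thick:
print(f"RD = {relative_density(0.5e-3, 11e-3, 4.83e-3):.2f}")   # ~0.70
```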
Temperature Evolution and Distribution

• Simulated temperature of tablets

The evolution of the 3D powder bed temperature and its distribution during compaction are shown in Figure 13a-d. At the beginning of the compaction phase (Figure 13b), the temperature increases from 21.4 °C (the initial loose powder temperature) to around 24.5 °C at the powder bed-die wall interface and around 22.4 °C at the surfaces in contact with the punches. As compaction continues (Figure 13c), the temperature continues to increase, and the temperature at the punch-powder bed interfaces remains lower than the temperatures at the center of the powder bed and at the die wall-powder bed interface. At the end of compaction (Figure 13d), the same temperature was observed at all interfaces, i.e., punch-tablet and die wall-tablet (about 40 °C), but the temperature at the center of the tablet remained higher (about 46 °C). However, the edge of the tablet showed the lowest temperature (about 36 °C). The simulation of the entire compaction phase showed that heat was generated mostly at the tablet-die wall interface and at the center of the tablet, as shown in other simulations [18,20]. The heat generated in the tablet flowed from the center to the outer layers by conduction within the tablet and was dissipated from the tablet by conduction through the die wall and punches. The low temperature at the edges of the tablet may be correlated with the higher density of these regions (Figure 11): the high density induces a higher thermal effusivity at the edges of the tablet and thus lower temperatures in these regions. The overall temperature distribution in the tablet at the end of compaction was consistent with previous studies [18,20,80].

• Measured surface and peripheral temperatures of the tablets

The out-of-die tablet surface temperature and peripheral temperature were measured with a thermal infrared camera one second after ejection. Figure 14 shows the thermal infrared camera images of the compacted tablet at increasing relative density. The images show a clear increase in temperature during compaction. The rise in temperature is explained by the heat generated by the irreversible deformation of the powder particles, interparticle friction, and particle-die wall friction [20,77,81]. Furthermore, the out-of-die tablet's surface temperature was lower than its peripheral temperature.
The higher temperature at the periphery may be due to the additional heat generated at the tablet-die wall interface through friction during the ejection phase [18]. Increasing the level of compaction of the powder bed increased the temperature of all parts of the tablet, because more friction and irreversible deformation occurred as the tableting pressure increased. Similar results were obtained in other studies [18,77].

To investigate the temperature evolution during a single compaction phase, the temperatures of out-of-die tablets obtained at increasing levels of compaction, i.e., increasing relative density, were measured. Figure 15 shows the temperature evolution with increasing relative density. Both the peripheral and surface temperatures increased with increasing compaction. As the level of compaction increases, the contact pressure between particles increases [76,81]; hence, more heat is generated, which causes the temperature in parts of the tablet to increase. The peripheral tablet temperature is higher than the surface temperature at all levels of compaction, which may be explained by (1) the additional frictional heat generated at the tablet-die wall interface during ejection and (2) the difference in the rate of heat loss between the surface and the periphery of the tablet. The temperature evolution curve can be separated into two distinct phases: (1) a low-compression pressure phase (21.4% < relative density < 35%), in which a small temperature change, from 21.4 °C to 24 °C, is observed. During this phase, particle reorganization and interparticle friction dominate [16], which may explain the small rise in temperature. (2) A high-compression pressure phase (35% < relative density < 80%), in which a large temperature change, from 24 °C to 38 °C, is observed.
During this phase, the particles are close enough to deform irreversibly on contact, and the consolidation between particles induces more radial pressure, which may cause stronger wall friction [76,81]. Therefore, a larger temperature change is observed during the second phase. These two phases of the temperature evolution are consistent with the loading-phase curve (Figure 10a), where the axial pressure is low in the region 21.4% < relative density < 35% and then increases sharply from 35% to about 80% relative density. Furthermore, as the number of tableting runs increased, the tablet temperature increased gradually (Figure 16). As the number of compression runs increases, the compression tools become hotter and transfer additional heat to the powder particles, leading to a gradual temperature rise [77,80].

The simulation results were compared with the measured temperatures for validation purposes. Figure 17 shows the measured temperature of the ejected tablet compared with the simulated temperature of the in-die tablet as a function of the level of compaction, represented by the punch displacement in the die. The measured and simulated temperatures both increased with the level of compaction. For both the measured and simulated temperatures, the tablet's peripheral temperature was higher than its surface temperature. In the simulation results, the difference in temperature between the surface and the periphery may be explained by the rates of heat generation and dissipation at each interface during compaction. In the experimental results, the additional heat generated during ejection may be responsible for the difference in temperature.

Tablet Moisture Content Evolution and Distribution during Compaction

At low upper punch penetration (upper punch displacement < 2 mm), the simulated and measured results matched. However, as the punch went deeper into the die, the simulated temperature of the in-die tablets became gradually higher than the measured temperature of the ejected tablets. Because the measured temperatures were obtained from ejected tablets, water molecules could have evaporated from the tablet surface during the ejection phase, before the measurements were taken. The latent heat used to evaporate the water molecules could have produced a cooling effect on the tablet. For instance, at maximum punch penetration (about 7.83 mm into the die), the difference between the average simulated temperature and the average measured temperature was about 15 °C. Assuming that water evaporated from the top 2 mm of the tablet, the corresponding amount of water evaporated is about 1 mg. With the initial amount of moisture being 43 mg of water per 1 g of powder, the evaporation of 1 mg during tablet ejection is plausible. The agreement between the simulated and measured temperatures at low punch penetration may be explained by the low water evaporation rate due to the small temperature rise. A decrease in tablet temperature during ejection was also found in other studies [18,77].
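This estimate can be checked with a simple latent-heat balance: the heat removed by evaporating a water mass m_w equals the sensible heat released when the top 2 mm layer cools by the observed 15 °C gap. The density and heat capacity below are assumed MCC-like values; the result lands at the same milligram order as quoted above.

```python
# Back-of-the-envelope check of the evaporation argument:
# m_w * L_v = m_layer * cp * dT for the top 2 mm of the tablet.
import math

L_v = 2.26e6                        # latent heat of evaporation of water (J/kg)
cp = 1300.0                         # tablet heat capacity (J/(kg K), assumed)
rho = 1100.0                        # compact apparent density (kg/m^3, assumed)
r, h, dT = 5.5e-3, 2.0e-3, 15.0     # die radius, cooled layer depth, temperature gap

m_layer = rho * math.pi * r**2 * h  # mass of the top 2 mm layer (~0.2 g)
m_water = m_layer * cp * dT / L_v   # water mass whose evaporation removes that heat
print(f"layer mass = {m_layer * 1e3:.2f} g, evaporated water ~ {m_water * 1e6:.0f} mg")
```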
Figure 18 shows the moisture sorption isotherm curve of the MCC VIVAPUR PH101 used in this study. The curve has a type II and/or III shape according to the Brunauer classification [82], which is obtained for highly porous materials such as the MCC PH101 used in this study, a material well known for its high intraparticle porosity. The type II shape indicates that adsorption begins with a first monolayer between the powder and the water molecules at lower humidity, followed by multilayer adsorption between water molecules, and finally leads to capillary condensation at higher humidity [47]. The moisture sorption curve was used in the COMSOL Multiphysics® software to follow the water content in the powder bed as the compaction level increased.
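The study uses the measured sorption curve directly as the moisture storage function; purely as an illustration of how such a type II/III isotherm is commonly parameterized, the sketch below fits the GAB model to placeholder equilibrium data at the six conditioning humidity levels (the water contents shown are invented, not the measured curve).

```python
# Illustrative GAB isotherm fit: w(phi) = w_m*C*K*phi / ((1-K*phi)*(1-K*phi+C*K*phi)),
# with phi the relative humidity (0-1) and w the equilibrium water content (kg/kg).
import numpy as np
from scipy.optimize import curve_fit

def gab(phi, w_m, C, K):
    return w_m * C * K * phi / ((1 - K * phi) * (1 - K * phi + C * K * phi))

phi = np.array([0.25, 0.35, 0.50, 0.65, 0.80, 0.97])       # conditioning RH levels
w = np.array([0.042, 0.050, 0.065, 0.082, 0.105, 0.145])   # placeholder contents
(w_m, C, K), _ = curve_fit(gab, phi, w, p0=(0.05, 10.0, 0.8),
                           bounds=([0.0, 0.0, 0.0], [1.0, 1e3, 0.9999]))
print(f"w_m = {w_m:.3f}, C = {C:.1f}, K = {K:.3f}")
```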
Figure 19 shows the moisture distribution in the tablet during the compaction phase. During compaction, the heat generated can be transmitted as thermal energy to the water molecules in the form of kinetic energy. Water molecules that have enough kinetic energy can diffuse between the particles and the intraparticle pores in the powder bed [27,72,83], driven by the water vapor pressure gradient between the powder bed and the surrounding environment. The air contained in the powder bed diffuses radially from the center to the periphery of the tablet and escapes from the powder bed at the edges of the tablet [27] during compaction. The outflowing air carrying water vapor largely defines the path of water vapor diffusion. Consequently, the water content of the tablet is higher at the periphery than at the center (Figure 19b-d). Furthermore, the higher temperature at the center of the tablet can impart higher kinetic energy to the water molecules in this region. Hence, the high heat coupled with the air path may explain the lower water content at the center of the tablet (Figure 19d). The tablet edges appeared moister because of their higher powder density, which can slow the diffusion of water molecules to the surrounding environment. During compaction, the overall water content density of the powder bed decreased from 20.85 kg of water/m³ of powder to about 8 kg of water/m³ of powder. This decrease may be explained by the evaporation of water molecules from the powder compact to the surrounding environment, particularly at the interfaces. Because the simulation in this study allowed water molecules to evaporate from the tablet surface, no accumulation was observed on the tablet surface. Water molecules that diffuse from the powder bed can be trapped at the interfaces between the compression tools and the powder bed during compaction, which may cause water vapor to accumulate in these regions. To verify this assumption, an NIR sensor was used to measure the amount of moisture at the tablet surface just after ejection. Figure 20a shows the raw, noisy NIR spectra collected for tablets with a water content of 4-14.5%.
The raw spectra without pretreatment show water absorption bands in the ranges 1190-1220 nm and 1400-1690 nm. After pretreatment with the combination of EMSC and SG (17, 1, 0), weak and strong water absorptions were observed at 1202 nm and 1450 nm, respectively (Figure 20b). The PCA score plot coupled with the Hotelling T² ellipse at a 95% confidence interval, created with the pretreated data, shows no outliers (Figure 21a). The model with three principal components explained 98% of the total variance of X (wavelength) and 98.5% of the total variance of Y (water content). A calibrated model (Figure 21b) was obtained with a regression coefficient in calibration (R²) of 0.97 and a root-mean-square error in calibration (RMSEC) of 0.14%. In the K-fold cross-validation, an R² of 0.96 and an RMSECV of 0.20% were obtained.

Figure 22 shows the predicted tablet surface moisture content over 10 min of tablet press operation, representing 600 pressed tablets. The tablets had the same compaction relative density of about 65%, and all were compressed from the same batch of powder. Each point in Figure 22 represents the average moisture content of five tablets, with measurements made every two minutes. Three replicate tests were performed, with two hours between replicates to allow enough time for the tableting tools to cool down. ANOVA showed that the three replicates were not significantly different (p = 0.96). Figure 22 shows that at 0 min, the moisture level of the powder was the same as the ambient condition of 4.25%. After a half-minute run, the moisture content of the tablet surface increased from the ambient level to 9% for the first replicate and 7.5% for the second and third replicates. The difference in moisture content between the replicates may be explained by the rate of moisture evaporation from the tablet surface after the ejection phase, which may vary with differences in tablet surface temperature between replicates. The increase observed after a half minute of tableting may be attributed to the abrupt increase in tablet temperature (Figure 17), which transferred enough kinetic energy to the water molecules in the tablet's pore channels.
Between the half-minute mark and 10 min of tableting, the moisture content of the tablet surface decreased for all three replicates (Figure 22). The gradual rise in tablet temperature shown in Figure 17 may explain this decrease: as the number of tableting runs increases, evaporation from the tablet surface may become more important. However, the tablet surface temperature increased at a slower rate after the first tablet compression, which consequently led to a slower rate of decrease of the tablet surface moisture content. After 10 min of runs, a steady state had not been reached because the tablet temperature was still rising.
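The replicate comparison reported above is a one-way ANOVA across the three moisture-content series; a minimal sketch with SciPy is shown below, with placeholder series shaped like Figure 22 rather than the measured data.

```python
# Sketch of the replicate comparison: one-way ANOVA across the three tableting
# replicates' surface-moisture series (moisture in %, placeholder values).
import numpy as np
from scipy.stats import f_oneway

rep1 = np.array([8.8, 8.3, 7.9, 7.6, 7.4, 7.2])
rep2 = np.array([8.5, 8.1, 7.8, 7.5, 7.3, 7.1])
rep3 = np.array([8.6, 8.2, 7.8, 7.6, 7.3, 7.2])
stat, p = f_oneway(rep1, rep2, rep3)
print(f"F = {stat:.2f}, p = {p:.2f}")   # p > 0.05 -> replicates not significantly different
```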
Based on these observations, we may assume that the moisture evaporating from the tablet accumulates at the punch-tablet interface. The water molecules in the vapor can be physisorbed through van der Waals forces onto the upper punch surface during the dwell time [84-86].
Due to the confined space between the powder bed and the upper punch, i.e., the interface, the number of van der Waals interactions between the vapor-phase molecules in that region can increase. Therefore, the first layer adsorbed on the punch surface may attract more water molecules, and multiple layers of water molecules may develop on the punch surface. The higher amount of water vapor evaporating from the tablet may result in condensation of water vapor below its saturation pressure, which is known as capillary condensation [87][88][89]. Capillary condensation may occur at the interface between the punch surface and the tablet during the dwell time. On the punch surface, capillary condensation can be more pronounced in cavities, such as the valleys in surface roughness, logo cavities, and eventual cracks [87][88][89]. Capillary condensation at the punch surface during tableting can lead to capillary adhesion of the tablet surface particles to the upper punch surface [37,90,91]. The tablet's surface particles can adhere to the punch surface when the capillary forces between the punch surface and the particles are stronger than the cohesive forces between the tablet's particles at the surface and the core [92]. Moisture is an important factor that contributes to the sticking of tablets observed during tableting runs [37,41,43]. Granules with higher moisture content led to a higher frequency of sticking [37]. Similarly, tableting parameters such as compression speed increase the occurrence of sticking [93,94]. In addition, compression speed increases the heat generated during tableting [18,23]. Consequently, the greater occurrence of sticking could be attributed to the increased water vapor at the punch-tablet interface due to the higher heat generated at increased speed. Furthermore, punches with surface cavities, such as logos and cracks caused by surface wear, have been found to increase sticking [1,35,79], which may be attributed to capillary condensation in these cavities, where capillary bridges between the punch and particles cause adherence through capillary forces. The use of punches coated with hydrophobic materials has been shown to minimize or diminish the frequency of sticking [1,29]. This may be because hydrophobic surfaces prevent water molecules from adsorbing, thereby avoiding capillary condensation. Conclusions Finite element modeling of the compaction phase of the tableting process was performed to investigate the density, temperature, and moisture distribution and evolution of tablets. The DPC and HAM models implemented in COMSOL Multiphysics® 5.6 software were used. At the end of the compaction phase, (1) the relative density tended to be uniform across the tablet, but was higher at the tablet edges, and (2) the temperature of the powder bed increased during compaction and was higher at the center of the tablet than on the surface, which was much cooler due to heat dissipation through the compression tools. These thermomechanical simulation results are in broad agreement with the experimental results of this study and corroborate findings reported in the literature. The simulation results showed that powder moisture evaporated from the powder bed to its surroundings during the compaction phase. The NIR light (wavelengths: 970 nm to 1700 nm) penetration depth was found to be less than 450 µm into the MCC tablet.
Hence, the NIR sensor was used to measure the water content of the tablet surface just after ejection, and showed that the moisture content was higher at the tablet surface than in the loose powder. These results suggest that a fraction of the water molecules evaporating from the powder bed at the powder-punch interfaces accumulates on the tablet surface. In addition, the evaporation rate increased as the tableting runs increased, due to the gradual rise in tablet temperature. Based on these observations, we propose that moisture evaporation from the powder bed and accumulation at the punch-tablet interfaces can induce capillary condensation between the tablet and the punch during the dwell time, leading to sticking. The results of this study bring new insights into punch sticking and may contribute to the investigation of other tableting problems. Author Contributions: All authors were involved in the conceptualization, methodology, validation, and formal analysis of this study. Original draft preparation, K.K. All authors were involved in reviewing, editing, and visualization. Supervision, R.G., N.A. and F.G. All authors have read and agreed to the published version of the manuscript.
19,748.4
2023-06-01T00:00:00.000
[ "Materials Science", "Medicine", "Engineering" ]
Molybdenum-Doped ZnO Thin Films Obtained by Spray Pyrolysis A batch of ZnO thin films, pure and doped with molybdenum (up to 2 mol %), was prepared using the spray pyrolysis technique on glass and silicon substrates. The effect of molybdenum concentration on the morphology, structure and optical properties of the films was investigated. X-ray diffraction (XRD) results show a wurtzite polycrystalline crystal structure. The average crystallite size increases from 30 to 80 nm with increasing molybdenum content. Scanning electron microscopy (SEM) images demonstrate a smooth and homogeneous surface with densely spaced nanocrystalline grains. The number of nuclei increases, growing over the entire surface of the substrate with uniform grains, when the molybdenum concentration is increased to 2 mol %. The estimated root mean square (RMS) roughness values for the undoped ZnO thin films and those doped with 1 mol % and 2 mol % Mo, determined by atomic force microscopy (AFM), are 6.12, 23.54 and 23.83 nm, respectively. The increase in Mo concentration contributes to an increase in film transmittance. Introduction In recent times, conductive and transparent ZnO films have gained increasing attention due to their properties, such as light-emitting semiconductivity, a relatively low deposition temperature and a high exciton binding energy [1]. These properties have led to many applications of the films, including acoustic sensors, high-temperature thermoelectric devices, active emitters in LEDs and laser diodes, and TFTs for the real-time sensing of gas molecules [2,3]. Different growth techniques, such as sol-gel, thermal evaporation, chemical vapor deposition (CVD), ultrasonic spray pyrolysis (USP) and magnetron sputtering, can be used to grow thin ZnO films [4][5][6][7][8][9]. The spray pyrolysis technique is attractive because it is cost-effective, has an easy experimental setup and allows wide-area deposition [4,10]. Zinc oxide is an n-type semiconductor with a wide band gap, a high free exciton binding energy (60 meV), a wide range of resistivity values and high transparency in the visible region [1]. Because of their optical properties, these films are suitable for use as transparent conductive films when doped with a proper dopant, which reduces the band gap. ZnO is a metal oxide semiconductor whose application in the field of optoelectronics is quite popular [11]. Nowadays, position-sensitive photodetectors have attracted great interest in many areas. These instruments are applicable in devices for the precise measurement of linear displacements, vibration indication, the monitoring of light-emitting objects, the determination of roughness, etc. Their structure is "semiconductor-thin dielectric-transparent conductor", such as the "Si-SiO2-metal oxide" structure [12].
To date, relatively little attention has been directed towards investigating pure and Mo-doped ZnO thin films compared to SnO2 thin films obtained by spray pyrolysis [13,14]. Molybdenum (Mo) is a dopant with the ability to enhance the conductivity and transparency of zinc oxide thin films [15]. With a smaller radius (0.062 nm) compared to Zn (0.083 nm), Mo is highly suitable for doping into the ZnO matrix. It has the ability to donate four electrons to the free carriers owing to the significant valence difference between Mo6+ ions and the substituted Zn2+ ions, thereby providing sufficient free carriers and influencing the ion scattering effect [1]. What is more, Mo exhibits multiple valence states, which allows the contribution of multiple carriers by a single Mo dopant atom [16]. The influence of Mo concentration on the physical properties of ZnO thin films was studied by Swapna and Kumar [17]. Our work aims to investigate the growth of thin films using different precursors with a technology we have developed. Experimental Glass and silicon substrates were used for the deposition of undoped and Mo-doped zinc oxide thin films via the spray pyrolysis technique. MoO3 (Alfa Aesar) and zinc acetate dihydrate (Zn(CH3COO)2·2H2O, 99%, Alfa Aesar) were used as precursors, with a 1:4 (by volume) mixture of ethanol and distilled water as the solvent. The solution was stirred for 30 min, during which a few drops of acetic acid were added to prevent the precipitation of zinc hydroxide. To achieve the full solubility of MoO3, a few drops of HCl and NH3 were added and the solution was stirred for another 2 h to obtain a homogeneous solution. The solution was maintained at a total concentration of 0.1 M, and the Mo concentrations were 1 mol% and 2 mol%. Before the deposition process started, we cleaned the glass substrates with isopropanol and double-distilled water, while the silicon substrates underwent cleaning with HF. Deposition of the thin films was conducted using a spray pyrolysis unit developed by V. Zhelev and P. Petkov, patented under No. 4384 U1. Argon served as the carrier gas at a pressure of 1.2 bar. Identical experimental conditions were maintained for undoped and Mo-doped films to investigate the effects of the dopant on the structural, morphological and optical properties. The spray pyrolysis unit's software enabled the adjustment of various process parameters, such as target temperature, nozzle amplitude, distance between nozzle and substrate, nozzle speed and pressure, to optimize the deposition process and ensure homogeneous thin films. Optimal values were determined for the distance between the nozzle and substrate (20-22 cm), spray rate (5 mL/min), nozzle amplitude (45 degrees) and deposition temperature (380 °C) for both undoped and doped films.
Structural characterization was performed using an X-ray diffractometer, model Philips APD-15, with data collected at ambient temperature within the range 2θ = 20-85° using Cu-Kα radiation (λ = 1.54178 Å). Surface morphology was investigated via scanning electron microscopy (SEM) using a Zeiss Evo 10 operating in secondary electron mode with an accelerating voltage of 25 kV. Chemical composition analysis was conducted using an energy dispersive spectroscopy (EDS) detector (Zeiss Smart EDX). Atomic force microscopy (AFM) was employed with an MFP-3D instrument from Asylum Research (Oxford Instruments) to analyze surface morphology and roughness. Optical properties of the thin films were measured at room temperature using a Shimadzu UV-1900i UV-Vis spectrophotometer. In order to measure the thickness of the films, we utilized a Zeta-20 3D optical profiler. This instrument combines a fully integrated microscope with advanced metrology capabilities, enabling precise 3D imaging. Additionally, we employed the F20 thin film analyzer to aid in determining the thickness accurately. To prepare the samples, we etched the films sprayed onto glass substrates using a solution of HCl in distilled water. This method allowed us to obtain reliable measurements of the films' thickness. Structural Studies The XRD measurements indicate that all the films possess a polycrystalline structure with a wurtzite phase and a preferred orientation of the c-axis perpendicular to the substrates (refer to Figure 1). Four distinct peaks corresponding to the (100), (002), (101) and (110) diffraction planes of ZnO are clearly noticeable. No peaks attributable to molybdenum were detected in the XRD patterns, confirming the absence of additional phases in the Mo-doped ZnO thin films. Upon the addition of molybdenum, the intensities of the (100) and (101) peaks exhibited an increase, while the intensity of the (002) peak revealed a decrease. This trend suggests that the substitution of Zn atoms with isovalent Mo atoms probably induces cell deformation, resulting in changes in the film's crystallographic properties. It is visible that the (100) plane is preferable for the growth of an undoped ZnO film, while there is a lack of growth in the (102) and (110) planes, which can be influenced by the substrate. When doping with Mo, the (100) and (101) planes are preferred, which corresponds to the results achieved by D. Zhao et al. [18]. We monitored the decrease in the intensity of the (002) plane with the increase in the Mo concentration, which might be explained by the fact that the surface energy of the (002) plane is the lowest in the ZnO crystal.
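The paper reports average crystallite sizes from the XRD data without spelling out the estimate; a common choice, and an assumption here, is the Scherrer equation applied to a dominant reflection. A minimal sketch, with placeholder peak widths chosen only to bracket the reported 30-80 nm range:

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154178, K=0.9):
    """Crystallite size (nm) from a single XRD peak via the Scherrer equation,
    D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians.
    K = 0.9 is the usual shape factor; instrumental broadening is ignored here.
    """
    theta = np.deg2rad(two_theta_deg / 2.0)
    beta = np.deg2rad(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical peak parameters for the (101) reflection of wurtzite ZnO
# (2-theta near 36.3 degrees); the FWHM values are placeholders.
print(scherrer_size(36.3, 0.28))  # ~30 nm (undoped film)
print(scherrer_size(36.3, 0.10))  # ~84 nm (2 mol % Mo)
```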
Optical Studies The transmission and reflectance spectra of pure and Mo-doped ZnO thin films were measured in the range 300-1000 nm. The pure ZnO films exhibited the lowest optical transmittance in the visible region compared with the doped films (Figure 2). The addition of molybdenum to ZnO films improves the transparency of the oxide thin films. In principle, when an additive is introduced into an oxide, its transparency is expected to decrease. The effect we observed may be due to the isovalent substitution of zinc with molybdenum in the wurtzite structure of zinc oxide. The layers absorb in the UV region and begin to transmit at the start of the visible spectrum, around 350 nm. The onset of transmission, also known as the optical edge, changes with the addition of molybdenum: it shifts towards longer wavelengths with the addition of 1% molybdenum and back towards shorter wavelengths as the molybdenum content increases to 2%. This shift affects the width of the forbidden zone, whose values show the same trend.
Data on film thickness taken from the optical profilometer measurements are presented in Table 1. Additionally, the thickness of the films was calculated using an interference-fringe formula in which n is the refractive index and λ1 and λ2 are the wavelengths of the maxima and minima in the visible spectrum; these data are also presented in Table 1. It is evident that the measured film thicknesses correspond well to the calculated ones. From the transmission and reflection spectrophotometric data, the refractive index and extinction coefficient of the ZnO and Mo-doped ZnO thin films are estimated over a broad spectral range (300-800 nm). To solve the set of two nonlinear equations with two unknowns, the real and imaginary parts of the complex refractive index, we use a derivative approach on a wavelength-by-wavelength basis. The numerical procedure is based on the so-called "trust-region-dogleg" algorithm, which is specifically designed to solve nonlinear equations and thus to calculate the optical constants of the film as a function of wavelength [19,20].
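As an illustration of the wavelength-by-wavelength scheme described above, the sketch below solves a two-equation system for (n, k) at each wavelength. The forward model here is a deliberately simplified stand-in (single-interface Fresnel reflectance plus Beer-Lambert attenuation), not the dispersion model of refs. [19,20], and SciPy's Powell hybrid solver stands in for the trust-region-dogleg algorithm; all numerical values are placeholders.

```python
import numpy as np
from scipy.optimize import root

def forward_model(n, k, d_nm, lam_nm):
    # Simplified stand-in optical model (not the model of refs. [19,20]):
    # normal-incidence Fresnel reflectance of a single air/film interface
    # plus Beer-Lambert attenuation through the film thickness.
    alpha = 4.0 * np.pi * k / (lam_nm * 1e-9)          # absorption coeff. (1/m)
    R = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)  # Fresnel reflectance
    T = (1.0 - R) * np.exp(-alpha * d_nm * 1e-9)       # transmitted fraction
    return T, R

def solve_nk(T_meas, R_meas, d_nm, lam_nm, guess=(2.0, 0.05)):
    def residuals(x):
        n, k = x
        T, R = forward_model(n, k, d_nm, lam_nm)
        return [T - T_meas, R - R_meas]
    # 'hybr' (Powell hybrid) stands in for trust-region-dogleg: both solve
    # square systems of nonlinear equations.
    sol = root(residuals, guess, method="hybr")
    return tuple(sol.x) if sol.success else (np.nan, np.nan)

# Wavelength-by-wavelength extraction; T_spec and R_spec would come from the
# spectrophotometer. Synthetic placeholder spectra are used here.
wavelengths = np.arange(300.0, 801.0, 50.0)            # nm
T_spec = np.full(wavelengths.shape, 0.80)
R_spec = np.full(wavelengths.shape, 0.12)
nk = np.array([solve_nk(T, R, d_nm=150.0, lam_nm=lam)
               for T, R, lam in zip(T_spec, R_spec, wavelengths)])
print(nk)  # columns: n, k at each wavelength
```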
To determine the absorption coefficient, we use the relation α = (1/d) ln[(1 − R)²/T] [21], where T is the transmittance, R is the reflectance, and d is the film thickness. The optical band gap is calculated by applying the Tauc model procedure, with an accuracy of ±0.002 eV [22]: αhν = B(hν − Eg)^n, where hν is the incident photon energy, Eg is the optical band gap, B is a constant and n can be 1/2, 3/2, 2 or 3. The value of n is determined by the mode of the interband transition, i.e., direct allowed, direct forbidden, indirect allowed or indirect forbidden. To calculate the band gaps of the films, we used Tauc's plot, plotting αhν = f(hν) and extrapolating the linear part of the absorption edge [1]. The results show that the Eg value decreases with the introduction of the dopant, which could be due to new energy states between the valence and conduction bands. Additionally, any defects present could act as recombination centers; they form new levels in the band gap, thus lowering it [23]. The slight increase in the band gap value with the increase in dopant concentration originates from the Burstein-Moss effect. G. Chen et al. observed similar behavior in magnetron co-sputtered Mo-doped ZnO thin films [16].
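A minimal sketch of the Tauc extrapolation just described: a synthetic linear absorption edge is fabricated so the fit can be demonstrated end to end; in practice α would come from the measured T, R and d via the relation above. All values are placeholders.

```python
import numpy as np

# In practice alpha comes from the relation in the text,
#   alpha = (1/d) * ln((1 - R)**2 / T),
# with T, R measured and d from Table 1. Here a synthetic linear Tauc edge
# is fabricated so the extrapolation can be demonstrated end to end.
hv = np.linspace(2.8, 3.6, 200)                    # photon energy (eV)
Eg_true = 3.25                                     # placeholder band gap (eV)
tauc = np.where(hv > Eg_true, 2.0e7 * (hv - Eg_true), 0.0)  # alpha*h*nu

# Fit the linear part of the absorption edge and extrapolate to zero.
edge = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(hv[edge], tauc[edge], 1)
print(f"Estimated Eg = {-intercept / slope:.3f} eV")  # ~3.250
```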
Morphological Studies The surface morphologies of the undoped and Mo-doped ZnO thin films, investigated using scanning electron microscopy (SEM), are presented in Figure 3. The microstructure of the films consists of a number of grains, which are uniformly distributed throughout the surface. The surface morphology of the 1 mol % Mo-doped ZnO thin film shows small dense grains. When the concentration increases to 2%, the number of nuclei increases, and the nuclei grow over the entire surface area of the substrate with uniform grains. The grain size of the pure ZnO thin films is approximately 30 nm. The images reveal that the grain size of the Mo-doped films increases to 60-80 nm compared to the pure ZnO thin films. This could be explained as a result of the migration ability of the atoms on the surface. Additionally, the morphology of the ZnO films depends on many factors, such as the flow of the solution, substrate temperature, carrier gas, precursors, oxidant source, distance between nozzle and substrate, etc. [18]. When there is a fairly high flow of the carrier gas during the solid-phase reaction, the grain sizes increase due to their high capacity for building aggregates. A. Zak et al. identified a decrease in grain size, which might be closely related to a lower flow of carrier gas [24]. Figure 4 shows AFM 2D and 3D images of pure ZnO and doped ZnO thin films. The scanning areas are 20 µm × 20 µm and 1 µm × 1 µm. The films have uniform granular structures and the grains are of nanometer size. The particle size of the Mo-ZnO films is found to increase with the increase in the concentration of molybdenum, which corresponds to the SEM results. The average size of the grains is between 60 and 80 nm, calculated for a 1 µm × 1 µm scan. The evaluated root mean square (RMS) roughnesses of the undoped and the 1% and 2% Mo-doped ZnO thin films, assessed by atomic force microscopy (AFM), are 6.12, 23.54 and 23.83 nm, respectively. This study shows that the surface roughness is higher for Mo-doped thin films compared to pure films. Conclusions The effects of Mo on the structure, optical properties and morphology of ZnO thin films prepared on glass and silicon substrates were investigated. XRD patterns indicate the presence of polycrystalline films with a wurtzite structure. The intensity of the diffraction peaks alternates, indicating modifications in the structures of the doped films. SEM analysis showed that the microstructure of the films consists of dense grains uniformly distributed throughout the surface. The AFM studies reveal that the films' surface roughness increases when Mo is added as a dopant, and with the concentration of Mo as well. UV-VIS spectroscopy data show an increase in transmittance for Mo-doped ZnO thin films in the visible region.
Figure 4. AFM 2D and 3D images of pure ZnO and Mo-doped ZnO thin films. Table 1. The influence of the Mo content on the optical properties.
4,734.8
2024-05-01T00:00:00.000
[ "Materials Science" ]
An Improved De-noising Algorithm for Highly Corrupted Color Videos Using FPGA Based Impulse Noise Detection and Correction Techniques L. S. Usharani1*, P. Thiruvalar Selvan2 and G. Jagajothi1 1Periyar Maniammai University, Vallam, Thanjavur, India<EMAIL_ADDRESS>2TRP Engineering College (SRM Groups), Tiruchirappalli, India<EMAIL_ADDRESS> Objective: This research focuses on the design and implementation of an FPGA hardware architecture based image de-noising algorithm with automatic detection and correction of impulse noise. With the proposed hardware, an image or video camera can be interfaced with the implementing hardware for de-noising of salt and pepper noise, called impulse noise. Methods/Analysis: The algorithms proposed in this research mainly work on images to identify the impulse-noise-affected pixels and correct only the corrupted pixels instead of the uncorrupted ones. The novel method also modifies the existing features of various de-noising techniques using a real-time, low-power FPGA architecture for noise detection and correction. Findings: The proposed research is in two stages, namely modified boundary discriminative noise detection and a recursive median filter based correction technique. These two stages were tested for various noise densities and delivered a better de-noising factor than the existing algorithms. The simulation results of the implemented algorithm prove an efficiency of 98% noise reduction factor for a 90% corrupted image or video. Introduction Digital image processing is a broad area with remarkable applications in our daily life, such as image authentication, broadcasting, medicine, automatic control equipment, and military surveillance 1 . In all the above applications, the images captured by camera are affected during image acquisition and transmission by different kinds of noise. Noise may severely degrade the quality of an image and cause information loss 2,3 . According to the different types of disturbances or errors in the image acquisition and transmission process, image noise may be classified as non-isotropic noise, periodic noise, shot noise, speckle noise, amplifier noise and salt-and-pepper noise, called impulse noise 4 . In recent decades, different processes have become available to remove or reduce noise from digital images. For that purpose, several linear filters have been utilized for de-noising images 5 . But because linear filters blur the image, nonlinear filters have been exploited widely for their improved filtering performance. Noise Models and Filtering Methods In the noise model, impulse noise has the highest priority to be removed from an image before the image is applied to image processing tools 6 . Impulse noise is also called salt and pepper noise, spike noise or shot noise; it is mainly caused by malfunctioning pixel elements, faulty memory locations, or timing errors in the digitization process 7 . In this research we mainly focus on removing impulse noise from real-time video images.
Filtering the impulse noise is a primary process used to achieve noise estimation, noise reduction, re-sampling and interpolation 11 . All filtering methods have two stages: the first stage is to detect the noise, and the second stage is to eliminate or remove the noise from the digital image. In the first stage, the pixels are divided roughly into two types based on their intensity values 12 : noisy pixels and noise-free pixels are identified, and the noisy pixels are changed to non-noise pixels by the filter application. Here, Y denotes the noisy pixel value and X the original value. Related Works Several filtering techniques have been proposed to remove noise from digital images. The proposed filtering methods include the mean filter, average filter, median filter, etc. 13 . Existing surveys of various noise filtering techniques imply that nonlinear filters perform better at removing noise than linear filters. Among the nonlinear filters, the median filter is most popular due to its simplicity of implementation and efficiency in noise suppression. The general median filter applies the median operation to all pixels unconditionally, without checking whether a pixel is corrupted or uncorrupted. This unconditional operation is done by the standard Median Filter (MF) and the center weighted median filter 16 . Noise reduction methods were proposed by applying a fine threshold method to each pixel present in the corrupted image using the Center Weighted Median Filter (CWMF). In 14 , an adaptive center weighted median filter was proposed to remove salt and pepper noise. In 15 , various new nonlinear filtering techniques for image de-noising were suggested. The Switching Median Filter (SMF), Adaptive Switching Median Filter (ASMF), decision based algorithm, and adaptive decision based robust statistics estimation filter are popular noise reduction filters suggested by various researchers that include noise detection and correction methods. Hence, there is a strong need for a novel method to detect and correct the corrupted pixels in digital images.
Novelty Method for Impulse Noise Detection This proposed research mainly focuses on detecting the noisy pixels in the pixel set of an image. To detect the noisy pixels, the proposed method uses a pixel intensity calculation technique that forms three kinds of pixel sets: a minimum intensity pixel set, a medium intensity pixel set and a maximum intensity pixel set. Salt and pepper noise has intensity levels of 255 and 0, respectively, and intensities from 0 to 255 can be partitioned into minimum, medium and maximum levels 9 . The noise detection method consists of two different rules. The first rule is applied to find the intensity cluster set. The second rule is applied after the first rule if and only if the processing pixel lies in the minimum or maximum intensity level of the cluster set. In order to find the intensity cluster set, let us consider a 5 × 5 image window as below. In this 5 × 5 window, the processing pixel is 212. The pixel 212 is first subjected to rule 1: the 5 × 5 window is sorted in ascending order (P 0 ) to find the median, which in P 0 is 86. Then P int (the intensity difference of each pixel) is calculated, and the three intensity sets are found as depicted below. As already stated, the second rule is applied if and only if the processing pixel lies in the minimum or maximum intensity level after the pixel intensity cluster calculation. In our example the processing pixel 212 lies in the maximum intensity level, so rule 2 is applied. Since the processing pixel is in the maximum intensity cluster, the window size can be reduced to 3 × 3. Rule 2 then starts by finding the sorted pixel set (P 0/new ) and the vector intensity set, after which the minimum, medium and maximum intensities of the 3 × 3 window are calculated. After rule 2, the processing pixel 212 lies in the medium intensity cluster. Hence the processing pixel is identified as belonging to the uncorrupted pixel set, and no further filtering is needed. The typical block diagram of the proposed research is shown in Figure 1. Impulse Noise Reduction/Correction Method Let us now consider a 3 × 3 image window, as below, in which the processing pixel is 255. The pixel 255 is applied to rule 1 and rule 2, and it is identified that 255 lies in the maximum intensity cluster; it is therefore identified as salt noise and should be filtered to reduce the noise, as described below. The same process is repeated for all pixels in the image. Noise reduction method: the noise reduction process starts by forming a one-dimensional array (P 1D ) and then a modified pixel set (P mod ) by removing the values 0 and 255 from the one-dimensional array (P 1D ). In this example the median is 78, which lies in the medium intensity cluster. Hence the processing pixel 255 is replaced with 78, and the noise is detected and corrected successfully. Data Flow Diagram of the Research The data flow diagram shown in Figure 2 consists of three units. The first unit is the preprocessing unit, where the video-to-frame conversion is done. The second unit is the impulse noise detection method 10 . The third unit concentrates on correcting the impulse noise in the image and delivering the final output video.
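A minimal sketch of the detect-then-correct idea described above, as one possible reading: classify a pixel as a suspected impulse only if it takes the salt/pepper extremes, then replace it with the median of the non-extreme pixels in its window, echoing the removal of 0 and 255 from the one-dimensional array before taking the median. This simplifies the paper's cluster rules (the exact intensity-cluster thresholds are not fully specified here), so treat it as illustrative only.

```python
import numpy as np

def denoise_impulse(img, win=5):
    """Simplified detect-and-correct pass over a grayscale uint8 image."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            p = img[i, j]
            if p != 0 and p != 255:
                continue                      # uncorrupted pixel: leave as-is
            window = padded[i:i + win, j:j + win].ravel()
            clean = window[(window != 0) & (window != 255)]
            if clean.size:                    # median of uncorrupted neighbors
                out[i, j] = int(np.median(clean))
    return out

# Usage: corrupt a synthetic frame with 90% salt-and-pepper noise, then restore.
rng = np.random.default_rng(0)
img = np.full((64, 64), 120, dtype=np.uint8)
noisy = img.copy()
mask = rng.random(img.shape) < 0.9
noisy[mask] = rng.choice([0, 255], size=mask.sum())
restored = denoise_impulse(noisy)
```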
FPGA Hardware Implementation Details The median filter is designed on the FPGA platform as shown in the following figures. It uses a mask that is applied to each pixel present in the input image 13 . The median value is found by placing the pixel values in ascending order and selecting the midpoint value. The implemented FPGA hardware is depicted in Figures 3 and 4. The proposed noise detection and filtering algorithm and its FPGA hardware architecture were designed and verified on a Spartan-3E family device using ModelSim 6.1 and Xilinx 12.1. Our proposed de-noising system is evaluated with the peak signal to noise ratio. Here the real-time video capturing, noise addition and noise filtering are done using the hardware implemented in Figure 4. The noise density is varied, and the result of the proposed technique is compared with others, as shown in Figures 4 and 5. The power consumption and performance analysis of the research is shown in Tables 1 and 2. Simulation Results The performance of our proposed impulse noise detection and correction method is tested with the color images listed in Figures 5 and 6. In the experimental simulation, the color image is corrupted with impulse noise (salt and pepper noise), where the pixel value 255 represents salt noise and the pixel value 0 represents pepper noise. To test the noise correction capability, the added salt and pepper noise is varied from 10% up to a maximum of 90%. For each variation, our proposed algorithm produced a good restoration level. The peak signal to noise ratio and mean square error values are calculated for various noise variances, and greater restoration performance is found, as illustrated in Table 3. PSNR relates the original image to the resulting image and is defined as PSNR = 10 log10(255^2/MSE), where MSE = (1/(M × N)) Σ i Σ j (x ij − y ij )^2, x ij is a pixel of the original image, y ij is a pixel of the restored image, MSE is the mean square error and PSNR is the peak signal to noise ratio. Conclusion As salt and pepper noise has the highest priority to be removed from videos and images, the proposed de-noising algorithms should achieve a better noise removal ratio than the existing ones. As per this requirement, the proposed FPGA hardware algorithm enhances and restores the image with high quality, without salt and pepper noise. The peak signal to noise ratio and mean square error values are calculated for various noise variances, and greater restoration performance is found. The noisy pixels in the image show different intensity values instead of the true pixel values. Impulse noise takes two prominent values, 'a' and 'b', each occurring with a given probability; if these probabilities are greater than 0.2, the noise may swamp out the image. The typical salt value for an 8-bit image is 255 and the pepper value is 0. The main reasons for salt and pepper noise are given in 9,10 . Figure 1. Block diagram for noise detection and correction. Figure 4. Median finding pixel comparator design for FPGA architecture. Figure 5. Impulse noise removing capability of the proposed system. Figure 6. Real video impulse noise removing capability of the proposed system. Table 1. Comparison of power consumption of Spartan-3 family FPGAs. Table 2. Performance analysis of hardware utilization. Table 3. Peak signal to noise ratio.
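A minimal sketch of the PSNR/MSE computation used to populate Table 3; the frames below are hypothetical stand-ins for the original and restored video frames.

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """PSNR in dB between two images: 10*log10(peak^2 / MSE)."""
    x = original.astype(np.float64)
    y = restored.astype(np.float64)
    mse = np.mean((x - y) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# Hypothetical frames; in the paper these are the original and the de-noised
# frames at each tested noise density (10% to 90%).
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
restored = np.clip(original.astype(int) + rng.integers(-3, 4, size=(64, 64)),
                   0, 255)
print(f"PSNR = {psnr(original, restored):.2f} dB")
```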
2,571.8
2015-04-01T00:00:00.000
[ "Computer Science" ]
Progressive Generation of Long Text with Pretrained Language Models Large-scale language models (LMs) pretrained on massive corpora of text, such as GPT-2, are powerful open-domain text generators. However, as our systematic examination reveals, it is still challenging for such models to generate coherent long passages of text (e.g., 1000 tokens), especially when the models are fine-tuned to the target domain on a small corpus. Previous planning-then-generation methods also fall short of producing such long text in various domains. To overcome the limitations, we propose a simple but effective method of generating text in a progressive manner, inspired by generating images from low to high resolution. Our method first produces domain-specific content keywords and then progressively refines them into complete passages in multiple stages. The simple design allows our approach to take advantage of pretrained LMs at each stage and effectively adapt to any target domain given only a small set of examples. We conduct a comprehensive empirical study with a broad set of evaluation metrics, and show that our approach significantly improves upon the fine-tuned large LMs and various planning-then-generation methods in terms of quality and sample efficiency. Human evaluation also validates that our model generations are more coherent. Introduction Generating coherent long text (e.g., 1000s of tokens) is useful in myriad applications of creating reports, essays, and other long-form content. Yet the problem is particularly challenging as it demands models to capture global context, plan content, and produce local words in a consistent manner. Prior studies on "long" text generation have typically been limited to outputs of 50-200 tokens (Shen et al., 2019; Bosselut et al., 2018; Zhao et al., 2020). 1 Code available at https://github.com/tanyuqian/progressive-generation Figure 1: Results of large-scale LMs (GPT-2 and BART) fine-tuned on 10K stories. Coherence of text is evaluated by the BERT next sentence prediction (NSP) score, where the x-axis is the position of the evaluated sentences in the passage. There is a significant gap in coherence between text by humans and text by large-scale LMs. Our proposed ProGen instead generates more coherent samples, close to human text. Recent large-scale pretrained language models (LMs), such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020), have emerged as impressive open-ended text generators capable of producing surprisingly fluent text. The massive LMs are typically pretrained on large corpora of generic text once, and then fine-tuned with small domain-specific data. The latest work has mostly focused on the regime of relatively short text with low hundreds of tokens. For example, Holtzman et al. (2020); See et al. (2019); Hua and Wang (2020) studied GPT-2 and BART generations with a maximum length ranging from 150 to 350 tokens. In this work, we study the problem of generating coherent, much longer passages of text (e.g., 1000 tokens). GPT-3 (Brown et al., 2020) was reported to produce long essays, yet the results seem to need extensive human curation (e.g., MarketMuse; Gardian), and the system is not publicly available to adapt to arbitrary desired domains. In this work, we examine fine-tuning of large-scale LMs for domain-specific generation of extra-long text. We find that samples produced by GPT-2 fine-tuned on small domain-specific corpora exhibit various imperfections, including excessive repetitiveness and incoherence between sentences far apart.
Figure 1 measures the coherence of text generated by the fine-tuned GPT-2 w.r.t. the BERT next sentence prediction (Devlin et al., 2019) score. As the figure shows, GPT-2 models (regardless of the model size) exhibit a significant gap in the score compared with human text, hence falling short in generating coherent text. We hypothesize that the problem is mainly caused by the sequential generation order of the LMs, which makes global content planning of the passage difficult, especially when the generated text is long and contains thousands of words. One could potentially adopt the recent planning-then-generation or non-monotonic methods (Sec 2), yet those methods either require specialized neural architectures that need costly retraining for each domain (Gu et al., 2019; Fan et al., 2019), or rely on dedicated intermediate content plans (e.g., summaries, SRL labels) (Fan et al., 2019; Yao et al., 2019) with limited flexibility, producing sub-optimal results as shown in our experiments. To overcome the limitations, we introduce a new method for Progressive Generation of Text (ProGen). We observe that the generation of some words (e.g., stop words) does not require many contexts, while other words are decisive and have long-term impact on the whole content of the passage. Motivated by this observation, our approach first produces a sequence of the most informative words, then progressively refines the sequence by adding finer-grained details in multiple stages, until completing a full passage. The generation at each stage is conditioned on the output of the preceding stage, which provides anchors and steers the current generation (Figure 2). The intermediate words produced at each stage are defined based on a simple TF-IDF informativeness metric. The approach enjoys several core advantages: (1) Although the progressive approach implements a conceptually non-monotonic generation process, generation at each stage can still be performed in a left-to-right manner and thus is directly compatible with the powerful pretrained monotonic LMs. The LMs at different stages are easily fine-tuned to accommodate a target domain using only small, independently constructed data. Intuitively, each LM is addressing a sub-task of mapping a sequence to a finer-resolution one, which is much simpler than the overall task of mapping from conditions to full passages of text. In this work, we use BART (Lewis et al., 2020) for generation at each stage, though one can also plug in other off-the-shelf LMs. As seen from Figure 1, ProGen can generate much more coherent text compared with GPT-2 and nearly match human text in terms of the BERT-NSP score; (2) In contrast to the typical 2-stage planning-then-generation in prior work, the simple progressive strategy offers added flexibility for an arbitrary number of intermediate stages, yielding improved results; (3) The training data for each stage is extracted from the domain corpus using the simple TF-IDF metric, without need of additional resources (e.g., pretrained summarization models) as in prior work, making the method broadly applicable to various domains and languages. We conduct extensive empirical studies on the CNN News (Hermann et al., 2015) and WritingPrompts (Fan et al., 2018) corpora, evaluating various systems by a wide range of automatic metrics as well as human judgement. Results show that ProGen achieves strongly improved performance by decomposing the generation into more progressive stages.
Our method produces diverse text passages of higher quality and coherence than a broad set of models, including fine-tuned GPT-2, BART, and various planning-then-generation strategies. Related Work Content planning in generation. The idea of separating content planning and surface realization has been studied in early text generation systems (Reiter and Dale, 1997). Recent neural approaches have also adopted similar planning-then-generation strategies for data-to-text (Moryossef et al., 2019; Puduppully et al., 2019), storytelling (Fan et al., 2019; Yao et al., 2019), machine translation (Ford et al., 2018), and others (Hua and Wang, 2019; Yao et al., 2017). These models often involve customized architectures incompatible with the existing large LMs. Scaling those models for long text generation can thus require expensive training, which restricts systematic studies. On the other hand, it is possible to adopt some of the content planning strategies (e.g., summaries or SRL sequences as the plans (Fan et al., 2019)), and repurpose pretrained LMs for generation at each stage. However, these strategies, with dedicated intermediate plans and a pre-fixed number (typically 2) of stages, can have limited flexibility, leading to sub-optimal results as shown in our empirical study. Besides, creating training data for planning requires additional resources (e.g., pretrained summarization models or SRL models) which are not always available (e.g., in certain domains or for low-resource languages). In contrast, we propose a simple way of designing the intermediate stages based on word informativeness, which can flexibly increase the number of stages for improved results, and easily create training data for all stages without additional models. Non-monotonic generation and refinement. Another relevant line of research is non-monotonic generation (Gu et al., 2019), infilling (Zhu et al., 2019; Shen et al., 2020; Qin et al., 2020), and refinement (Lee et al., 2018; Novak et al., 2016; Mansimov et al., 2019; Kasai et al., 2020), which differ from the restricted left-to-right generation in conventional LMs. Again, those approaches largely depend on specialized architectures and inference, making them difficult to integrate with the powerful pretrained LMs. The prior studies have focused on generating short text. Our proposed coarse-to-fine progressive generation conceptually presents a non-monotonic process built upon the pretrained monotonic LMs, which permits fast adaptation to any target domain and generation of much longer text. Long text generation. Previous work has made attempts to generate text of up to two or three hundred tokens. Those methods often adopt the similar idea of planning-then-generation as above (Shen et al., 2019; Zhao et al., 2020; Bosselut et al., 2018; See et al., 2019; Hua and Wang, 2020; Rashkin et al., 2020). Another line of work instead focuses on extending the transformer architecture (Vaswani et al., 2017) to model longer text sequences (e.g., Dai et al., 2019; Choromanski et al., 2021). For example, Liu et al. (2018) used a hybrid retrieval-generation architecture for producing long summaries; Dai et al. (2019) showed long text samples qualitatively. Our work systematically examines the pretrained LMs in generating long domain-specific text, and proposes a new approach that empowers pretrained LMs to produce samples of significantly higher quality.
Progressive Generation of Text One of the main challenges in generating long coherent passages is modeling long-range dependencies across the entire sequences (e.g., 1000 tokens). We propose a progressive generation approach that is conceptually simple yet effective. Intuitively, progressive generation divides the complex problem of generating the full passage into a series of much easier steps of generating coarser-grained intermediate sequences. Contrary to generating everything from left to right from scratch, our progressive generation allows the model to first plan globally and then shift attention to increasingly finer details, which results in more coherent text. Figure 2 illustrates the generation process. Generation Process Let y := [y 1 , y 2 , . . . , y T ] be the output text, where each y i is a token of language (a word or a subword). The output sequences are generated either conditionally on other information x (e.g., generation of a story given a prompt), or unconditionally (in which case we assume x ≡ ∅ while keeping the same notation). Instead of generating the full passage y directly, we propose to add multiple intermediate stages: x → c 1 → c 2 → · · · → c K → y, where for each stage k ∈ {1, . . . , K}, c k is an intermediate sequence containing information of the passage at a certain granularity. For instance, at the first stage, c 1 can be seen as the highest-level content plan consisting of the most informative tokens such as key entities. Then, based on the plan, we gradually refine it into subsequent c k , each of which contains finer-grained information than that of the preceding stage. At the final stage, we refine c K into the full passage by adding the least informative words (e.g., stop words). The generation process corresponds to a decomposition of the conditional probability as: p(y|x) = p(c 1 |x) · p(c 2 |c 1 ) · · · p(c K |c K−1 ) · p(y|c K ). Per the above intuition, the early-stage c k , as high-level content plans, should contain informative or important words, to serve as skeletons for subsequent enrichment. We next concretely define the order of generation, namely, which words each stage should generate. Specifically, we propose a simple method that constructs a vocabulary V k for each stage k, based on the importance of words in the target domain. Each particular stage k only produces tokens belonging to its vocabulary V k . By the progressive nature of the generation process, we have V 1 ⊂ V 2 ⊂ · · · ⊂ V K ⊂ V. That is, V 1 contains the smallest core set of words in the domain, and the vocabularies gradually expand at later stages until arriving at the full vocabulary V. Note that the vocabularies in later stages are supersets of those in earlier stages. This allows the later stages to remedy and polish potential mistakes made in earlier stages when necessary. We discuss the construction of the vocabularies below.
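Before turning to vocabulary construction, a sketch of the staged decoding chain defined by the decomposition above, using Hugging Face BART checkpoints; the per-stage model names are hypothetical placeholders for LMs fine-tuned as described later.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Hypothetical paths to the fine-tuned stage models: stage 1 maps the prompt
# x to c1, stage k maps c_{k-1} to c_k, and the last stage maps c_K to y.
STAGE_MODELS = ["stage1-bart", "stage2-bart", "stage3-bart"]

def progressive_generate(prompt: str) -> str:
    text = prompt  # x (may be empty for unconditional generation)
    for path in STAGE_MODELS:
        tokenizer = BartTokenizer.from_pretrained(path)
        model = BartForConditionalGeneration.from_pretrained(path)
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        # Each stage still decodes left-to-right; top-p sampling matches the
        # decoding setup reported in the experiments (p = 0.95, <= 1024 tokens).
        out = model.generate(**inputs, do_sample=True, top_p=0.95,
                             max_length=1024)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
    return text  # the refined full passage y
```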
Stage-wise vocabularies based on word importance. Given a text corpus D of the target domain with the full vocabulary V, we define the importance scores of words in V based on the TF-IDF metric. We then rank all the words and assign the top |V k | words to the intermediate vocabulary V k , where the size |V k | is a hyper-parameter. More concretely, for each word w ∈ V, we first compute its standard TF-IDF score (Salton and McGill, 1986) in each document d ∈ D, which essentially measures how important w is to d. The importance of the word w in the domain is then defined as the average TF-IDF score across all documents containing w: importance(w, D) = (1/DF(w, D)) Σ_{d ∈ D} TF_IDF(w, d), where TF_IDF(w, d) is the TF-IDF score of word w in document d, and DF(w, D) is the document frequency, i.e., the number of documents in the corpus that contain the word w. Pretrained language models as building blocks. Compared to many of the previous planning-then-generation and non-monotonic generation methods, one of the key advantages of our progressive generation design is its direct compatibility with the powerful pretrained LMs that perform left-to-right generation. Specifically, although our approach implements a non-monotonic generation process that produces important words first, we can still generate the intermediate sequences c k at each stage in a left-to-right manner. Thus, we can plug a pretrained LM, such as GPT-2 or BART, into each stage to carry out the generation. As described in Section 3.2, for each stage k, we can conveniently construct stage-specific training data from the domain corpus D using the stage-wise vocabulary V k , and fine-tune the stage-k LM in order to generate intermediate sequences pertaining to the target domain. One can add masks to the pretrained LM's token distributions to ensure the stage-k LM only produces tokens belonging to V k . In practice, we found this unnecessary, as the pretrained LM usually learns the pattern quickly through fine-tuning and generates appropriate tokens during inference. In our experiments we use BART for all stages, since BART is an encoder-decoder model which can conveniently take as input the resulting sequence from the preceding stage and generate the new one. (For the first stage in an unconditional generation task, we simply set x = ∅.) We note that GPT-2, and other relevant pretrained LMs, can indeed also be used as conditional generators (Radford et al., 2019; Liu et al., 2018) and thus be plugged into any of the stages. Training Our approach permits straightforward training/fine-tuning of the (pretrained) LMs at different stages given the domain corpus D. In particular, we can easily construct independent training data for each stage, and train all LMs in parallel. Note that no additional resources such as pretrained summarization or semantic role labeling models are required as in previous work, making our approach directly applicable to a potentially broader set of domains and languages. We plan to explore the use of our method in multi-lingual settings in the future. More concretely, for each stage k, we use the stage vocabularies V k−1 and V k to filter all relevant tokens in the documents as training data. That is, given a document, we extract the subsequence c* k−1 of all tokens from the document that belong to V k−1 , and similarly extract the subsequence c* k belonging to V k .
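A minimal sketch of the stage-vocabulary construction and training-data extraction just described, following the averaged TF-IDF definition; the corpus and stage sizes are toy placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def stage_vocabularies(corpus, sizes):
    """Rank words by average TF-IDF over the documents containing them and
    take nested top-|V_k| slices as the stage vocabularies (V_1 in V_2 in ...)."""
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(corpus)          # documents x words, sparse
    df = (tfidf > 0).sum(axis=0).A1            # document frequency DF(w, D)
    importance = tfidf.sum(axis=0).A1 / np.maximum(df, 1)
    words = np.array(vec.get_feature_names_out())
    order = np.argsort(-importance)            # most important words first
    return [set(words[order[:k]]) for k in sizes]

def extract_stage_sequence(document, vocab):
    """c*_k: the document's tokens that belong to V_k, in original order."""
    return [tok for tok in document.split() if tok in vocab]

corpus = ["the jeep circled the dusty camp at dawn",
          "a dog barked at the jeep near the camp",
          "the officer left the camp before dawn"]
V1, V2 = stage_vocabularies(corpus, sizes=(4, 8))   # toy stage sizes
pairs = [(extract_stage_sequence(d, V1),            # input to the stage-2 LM
          extract_stage_sequence(d, V2))            # ground-truth output
         for d in corpus]
```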
The c* k−1 and c* k are then used as the input and the ground-truth output, respectively, for training the LM at stage k with maximum likelihood learning. Therefore, given the stage-wise vocabularies {V k }, we can automatically extract training data from the domain corpus D for the different stages, and train the LMs separately. In the multi-stage generation, the intermediate sequences are not natural language. Yet we found that fine-tuning pretrained LMs (such as BART and GPT-2) to generate the intermediate sequences is very efficient in terms of data and computation. We tried training other models, such as small sequence-to-sequence models and n-gram models, from scratch, which we found to be much harder, requiring more data or yielding inferior performance. This again highlights the importance of using pretrained LMs, as enabled by our simple method design. Stage-level exposure bias and data noising. In the above training process, the outputs of each LM are conditioned on the ground-truth input sequences extracted from the real corpus. In contrast, at generation time, the LM takes as input the imperfect sequences produced at the previous stage, which can result in new mistakes in the outputs since the LM has never been exposed to noisy inputs during training. Thus, the discrepancy between training and generation can lead to mistakes accumulating through the stages. The phenomenon resembles the exposure bias issue (Ranzato et al., 2016) of sequential generation models at the token level, where the model is trained to predict the next token given the previous ground-truth tokens, while at generation time tokens generated by the model itself are instead used to make the next prediction. To alleviate the issue and increase the robustness of each intermediate LM, we draw on the rich literature on addressing token-level exposure bias (Xie et al., 2017; Tan et al., 2019). Specifically, during training, we inject noise into the ground-truth inputs at each stage by randomly picking an n-gram (n ∈ {1, 2, 3, 4}) and replacing it with another randomly sampled n-gram. The data noising encourages the LMs to learn to recover from mistakes in the inputs, leading to a more robust system during generation. Setup Domains. We evaluate on two text generation domains: (1) CNN News (Hermann et al., 2015) for unconditional generation, and (2) WritingPrompts (Fan et al., 2018) for conditional story generation, where the task is to generate a story given a prompt. The two datasets are chosen since they both contain long documents, with CNN's average and maximum lengths being 512 and 926, and WritingPrompts's being 437 and 942, respectively. To demonstrate the data efficiency of our approach in adapting to target domains, we sample 1,000 documents in each dataset for training. Model configs. We use BART for all stages of generation. Due to computation limitations, we experiment with models of 2-, 3-, and 4-stage generation. In our 2-stage model, the first stage covers about 25% of all content; in the 3-stage model, the first and second stages cover 15% and 25% of all content, respectively; and in the 4-stage model, the first three stages cover 15%, 20%, and 25% of all content. For model training, we follow the same protocol as (See et al., 2019) to fine-tune all pretrained models until convergence. To combat exposure bias, we add noise to the training data as described in Sec 3.2, with the probabilities of replacing 1/2/3/4-grams being 0.1/0.05/0.025/0.0125.
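A minimal sketch of the n-gram noising used to combat stage-level exposure bias, under the probabilities quoted above; sampling the replacement n-gram from the same sequence is a simplifying assumption, since any domain n-gram pool would do.

```python
import random

# Replacement probabilities for n = 1..4, as quoted in the text.
NOISE_P = {1: 0.1, 2: 0.05, 3: 0.025, 4: 0.0125}

def noise_sequence(tokens, rng=random):
    """Randomly replace an n-gram with another randomly sampled n-gram."""
    tokens = list(tokens)
    for n, p in NOISE_P.items():
        if len(tokens) < 2 * n or rng.random() >= p:
            continue
        i = rng.randrange(len(tokens) - n + 1)     # n-gram to overwrite
        j = rng.randrange(len(tokens) - n + 1)     # n-gram to copy from
        tokens[i:i + n] = tokens[j:j + n]
    return tokens

noisy_input = noise_sequence("officer jeep dog barking camp dawn".split())
```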
In the generation phase, we use top-p decoding (Holtzman et al., 2020) with p = 0.95 to generate at most 1024 tokens. Experiments were conducted on RTX6000 GPUs; model fine-tuning and generation took around 4 hours on a single GPU.

Comparison methods. We compare with a wide range of baselines, categorized into two groups: (1) Large pretrained LMs, including BART (Lewis et al., 2020) and GPT-2 in both small and large sizes (Radford et al., 2019); these LMs generate text in the standard left-to-right manner. (2) Progressive generation with various strategies adopted in prior planning-then-generation work; as in our proposed method, each stage adapts a pretrained BART for generation. Specifically, Summary first generates a short summary text as the content plan and, conditioning on the summary, produces the full passage of text (Fan et al., 2019); for training, summaries are obtained using the state-of-the-art pretrained CNN news summarization model based on BART. Keyword first generates a series of keywords, based on which the full text is generated in the next stage; following (Yao et al., 2019), the keywords for training are extracted with the RAKE algorithm (Rose et al., 2010). SRL follows the recent work (Fan et al., 2019) by first generating a sequence of predicates and arguments and then producing the full text conditionally; the same semantic role labeling tool as in the prior work is used to create training data. SRL+NER and SRL+Coref further augment the SRL method with an additional stage that generates entity-anonymized text conditioned on the predicate sequence prior to the final stage (Fan et al., 2019); SRL+NER uses an NER model to mask all entities, while SRL+Coref applies coreference resolution to mask all clusters of mentions. We use the same NER and coreference tools as in (Fan et al., 2019). Finally, as a reference, we also present the results of Human-written text (i.e., the text in the dev set).

Evaluation Metrics. To evaluate the generation quality for the domain-specific open-ended generation studied here, we primarily measure the "closeness" between two sets of text, one generated by the model and the other the real text from the target domain. We evaluate with a broad array of automatic metrics, including lexical-based and semantic-based quality metrics, and we also evaluate generation diversity. MS-Jaccard (MSJ) is a lexical-based metric (Montahaei et al., 2019), where MSJ-n measures the similarity of n-gram frequencies between two sets of text with the Jaccard index. TF-IDF Distance (TID) is defined as the distance between the average TF-IDF features of two text sets; we use it as an additional lexical-based quality measure. Fréchet BERT Distance (FBD) is a semantic-based metric (Montahaei et al., 2019) that measures the Fréchet distance between the generated and real text in the BERT feature space; using BERT features from shallow (S), medium (M) and deep (D) layers, we compute FBD-S/M/D, respectively. Backward BLEU (B-BLEU) is a diversity metric (Shi et al., 2018) measuring how well the generated text covers the n-grams occurring in the test set. Harmonic BLEU (HA-BLEU) (Shi et al., 2018) is an aggregated quality and diversity metric incorporating both the standard BLEU (i.e., precision) and the Backward BLEU (i.e., recall).
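As a rough illustration of the lexical metrics, the following sketch computes an MSJ-style score as the Jaccard index generalized to n-gram frequency distributions; the exact normalization and the averaging over n follow Montahaei et al. (2019), so this is a simplified reading rather than their reference implementation.

```python
from collections import Counter

def ngram_freqs(texts, n):
    """Relative n-gram frequencies over a set of tokenized texts."""
    counts = Counter()
    for tokens in texts:
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()} if total else {}

def msj_n(set_a, set_b, n):
    """Generalized Jaccard index between the n-gram frequencies of two text sets."""
    fa, fb = ngram_freqs(set_a, n), ngram_freqs(set_b, n)
    grams = set(fa) | set(fb)
    num = sum(min(fa.get(g, 0.0), fb.get(g, 0.0)) for g in grams)
    den = sum(max(fa.get(g, 0.0), fb.get(g, 0.0)) for g in grams)
    return num / den if den else 0.0
```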
Results. Figures 3 and 4 show the results of the various systems on the news and story domains, respectively, measured with different metrics against the test set (more complete results are given in the appendix). We can see that our progressive generation approach consistently outperforms the standard, single-stage LMs (GPT2-Small, GPT2-Large and BART) by a large margin on almost all metrics in both domains. Further, by increasing the number of progression stages, our method steadily achieves even stronger performance. This highlights the benefits of the flexible progressive generation strategy. The various models using pretrained LMs with previous planning-then-generation strategies show mixed results across the different metrics. For example, Summary achieves strong performance in terms of the semantic-based quality metric FBD-D (partially because the summaries are closer to the real text in the BERT feature space), but falls significantly behind other models in terms of diversity (B-BLEU4) and other quality metrics like MSJ and HA-BLEU. Similarly, the SRL-based methods give only mediocre results in terms of the semantic-based FBD-D. In contrast, our approach maintains a relatively consistent performance level. In particular, our 4-stage model, ProGen-4, is steadily among the best across all metrics, further validating the advantage of the proposed simple yet flexible multi-stage generation. These results also indicate the necessity of using a large, diverse set of automatic metrics for a comprehensive evaluation, and motivate human studies for further assessment.

Human Evaluation. In our human study, we asked three university students who are proficient English speakers to evaluate the coherence and fluency of the generated text. To better assess the coherence of long passages of text, we evaluate at both the passage level and the finer-grained sentence level. More concretely, for passage-level coherence, human raters assign a coherence score to each full-length text sample on a 5-point Likert scale. For a more detailed assessment, we further evaluate sentence-level coherence, where human raters label each sentence in the text passage with 0 or 1, indicating whether the particular sentence is coherent with the preceding context in the passage; we then calculate the average percentage of coherent sentences in the text generated by each model. Human raters also evaluate the language quality with a fluency score on a 5-point Likert scale. We compare our method with the systems that show the highest generation quality in the automatic evaluation, including BART, GPT2-Small, and Summary. We evaluated 50 examples for each comparison model on the CNN domain. The Pearson correlation coefficient of the human scores is 0.52, indicating moderate inter-rater agreement. Table 1 shows the results. All systems receive close fluency scores. Our approach obtained significantly higher coherence scores at both the passage and sentence levels. In particular, over 86% of the sentences in our model's generations are considered coherent with the context, improving over the other models by at least 10 percentage points.

Ablation Study and Analysis. Sample efficiency. We study how progressive generation can improve the sample efficiency of large LMs fine-tuned to target domains. The intuition is that, by focusing on the subsets of informative words, the early stages can more efficiently capture the domain-specific characteristics and then steer the subsequent refinement stages. Figure 5 shows the results, where we report the FBD score averaged over FBD-S/M/D. We can see that our approach makes more efficient use of the training data in learning to generate high-quality samples; for example, with only 1K training examples, our method achieves results comparable to those of large LMs trained on 30K examples.
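For reference, FBD-style scores rest on the Fréchet distance between Gaussian fits of two feature sets; a self-contained sketch follows, with random arrays standing in for the BERT features of real and generated text.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two (samples x dims) feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean)

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 8)),
                       rng.normal(1.0, 1.0, size=(200, 8))))
```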
Generation with gold plans. To investigate the importance of dividing the generation process into stages and what the stages learn separately, we add another set of text to our comparison: a 2-stage model whose first stage is replaced by the ground truth (the gold plan) while the second stage is kept the same (a BART model), shown as GoldPlan in Table 3. Note that with the gold plan, our model greatly narrows the gap with human text in terms of both the lexical (TID) and semantic (FBD-D) quality metrics. The results highlight the importance of plans in text generation: the intermediate plans act as an information bottleneck, and high-quality plans can lead to high-quality text generation.

Effect of data noising. We study the ablation of data noising, to check whether the noising operation really helps reduce the stage-wise exposure bias (Sec 3.2) as expected. Table 2 shows the comparison between models trained with and without noise. The added noise generally brings performance improvements in terms of the various metrics.

Example generations. Table 4 shows an example of text generated via three stages. We can see that our model first generates the key subject beckham and the team name liverpool in the very first stage, then adds more fine-grained details like acquisition and transfer in the second stage, and finally expands the keywords into a full document describing Beckham's joining a new team.

Conclusion. We have proposed a new approach for domain-specific generation of long text passages in a progressive manner. Our method is simple and efficient, fine-tuning large-scale off-the-shelf language models. We conduct extensive experiments using a variety of metrics and human studies, and demonstrate that our method outperforms a wide range of large pretrained LMs with single-stage generation or prior planning-then-generation strategies, in terms of the quality and coherence of the produced samples. The multi-stage generation also opens up new opportunities to enhance the controllability of text generation, which we would love to explore in the future.
Role of multiaxial stress state in the hydrogen-assisted rolling-contact fatigue in bearings for wind turbines

Offshore wind turbines often involve important engineering challenges, such as the improvement of the hydrogen embrittlement resistance of the turbine bearings. These elements frequently suffer the so-called phenomenon of hydrogen-assisted rolling-contact fatigue (HA-RCF) as a consequence of the synergic action of the surrounding harsh environment (the lubricant) supplying hydrogen to the material and the cyclic multiaxial stress state caused by in-service mechanical loading. Thus the complex phenomenon could be classified as hydrogen-assisted rolling-contact multiaxial fatigue (HA-RC-MF). This paper analyses, from the mechanical and the chemical points of view, the so-called ball-on-rod test, widely used to evaluate the hydrogen embrittlement susceptibility of turbine bearings. Both the stress-strain states and the steady-state hydrogen concentration distribution are studied, so that a better elucidation can be obtained of the potential fracture places where the hydrogen could be more harmful and, consequently, where the turbine bearings could fail during their life in service.

INTRODUCTION Offshore wind turbines often involve important engineering challenges [1], one of the most important being the improvement of the hydrogen embrittlement resistance of the turbine bearings, a key issue in the evaluation of the structural integrity of such components. These elements are prone to suffer the so-called phenomenon of hydrogen-assisted rolling-contact fatigue (HA-RCF) [2,3] as a consequence of the synergic action of the surrounding harsh environment (the lubricant) supplying hydrogen to the material and the cyclic multiaxial stress state caused by in-service mechanical loading [2,3]. Thus the complex phenomenon of progressive damage could be classified as hydrogen-assisted rolling-contact multiaxial fatigue (HA-RC-MF). Three important aspects linked with bearing failures are being extensively researched: (i) rolling contact fatigue (RCF) [4-7], (ii) the influence of carbide particles on fatigue life [8,9], and (iii) local microplastic strain accumulation via ratcheting [10-12]. To achieve a better assessment of the structural integrity of such components, the analysis of hydrogen accumulation (revealing the prospective damage places) arises as a key issue. In previous studies [2,3], the widely used RC-MF ball-on-rod test [12-15] was simulated by the finite element method (FEM) in order to obtain the stress-strain state inside the bearings during their life in service. From these states, the hydrogen distribution corresponding to the steady state in the radial direction of the bearing was obtained. This paper goes further than the previous research [2,3] by including the analysis of the hydrogen distributions in the hoop and axial directions, in order to obtain the potential fracture places where the hydrogen embrittlement phenomenon initiates.
NUMERICAL MODELING The study was divided into two uncoupled analyses. On the one hand, numerical simulation by means of a commercial finite element (FE) code was used to obtain the stress and strain states after six revolutions of the bar. From the results of such an analysis, a simple estimation of the hydrogen accumulation for long times of exposure to the hydrogenating environment was carried out, allowing the estimation of the potential hydrogen damage places. The geometry analysed consists of a steel bar of length L = 6 mm and diameter d = 9.53 mm, which rotates in contact with three equidistant steel balls of diameter D = 12.70 mm, each applying a point load of F = 300 N on the bar surface, as reflected in the scheme of Fig. 1a. The complete 3D geometry can be simplified to one half just by considering the symmetry plane r-θ shown in Fig. 1b and applying the corresponding boundary conditions, i.e., restricted displacement in the bar axial direction for all the nodes placed on the symmetry plane. Thus, an important saving of computing time is achieved, optimizing the available resources. In addition, the geometry of the contacting balls can also be simplified by considering the symmetry plane r-z of such components. Taking this into account, only a quarter of the whole geometry of each ball is modelled, as can be seen in Fig. 1b. The numerical modelling of the ball-on-rod test (six revolutions) was carried out considering the material constitutive law to be elastic-perfectly plastic, corresponding to a steel with the following material properties for both rod and balls: Young's modulus E = 206 GPa, Poisson's ratio ν = 0.3 and material yield stress σ_Y = 2065 MPa. The analysis was carried out considering isotropic strain hardening of the material and an updated Lagrangian procedure. According to the Hertz theory, considering only the elastic response of the components [16], a very localized effect can be expected in the contact zone between rod and balls. According to this, a ball pressing on a cylinder must undergo a contact pressure of 5.5 GPa with an elliptical contact zone whose axis lengths are 160 μm and 231 μm, respectively. A very refined mesh is required near the rod surface, whereas a coarser mesh was considered outside such a zone, since the local effect of contact vanishes at the rod core. Thus, elements were homogeneously distributed over a depth of about 1 mm from the rod surface; in this refined zone, the size of the elements is 43 × 52 × 280 μm in the radial, circumferential and axial directions. Regarding the meshing of the balls, the same type of elements was used, assuming a refined zone at the contact with element sizes similar to those used for the cylindrical bar. A point load of 300 N was placed at each ball centre and was progressively applied during the first rotation of the rod. Taking this into account, diverse meshes with linear 8-node hexahedral elements were considered until the required convergence of results was achieved. The optimum mesh (Fig. 2) consists of 154,000 elements: 130,000 for meshing the rod and 24,000 for meshing the three balls.
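For a quick cross-check of such contact states, the peak pressure of a Hertzian elliptical contact follows from the load and the contact semi-axes as p_max = 3F/(2πab); a minimal helper is sketched below (obtaining the semi-axes a and b themselves requires the full elliptical Hertz solution, which is not reproduced here, and the values in the example call are placeholders, not the quoted ones).

```python
import math

def hertz_peak_pressure(F, a, b):
    """Peak pressure of a Hertzian elliptical contact.
    F: normal load [N]; a, b: contact ellipse semi-axes [m]."""
    return 3.0 * F / (2.0 * math.pi * a * b)   # [Pa]

# Illustrative call with the test load and placeholder semi-axes:
print(hertz_peak_pressure(300.0, 100e-6, 200e-6) / 1e9, "GPa")   # ~7.2 GPa
```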
From the results of the mechanical simulation, a simple estimation of the behaviour of the bar against HA-RC-MF can be carried out by considering that hydrogen diffusion proceeds from the bar surface to inner points as a function of the gradients of both the hydrostatic stress (σ) and the hydrogen solubility (K_s) [17-19]:

J = -D [∇C - (V_H/(RT)) C ∇σ - (C/K_s) ∇K_s]    (1)

R being the universal gas constant, V_H the partial volume of hydrogen, T the absolute temperature, C the hydrogen concentration and K_s the hydrogen solubility, which is itself a one-to-one monotonically increasing function of the equivalent plastic strain, as explained in detail elsewhere [17-19]. In particular, a linear relationship between solubility and plastic strain, K_s = K_s(ε_p) with K_s linear in ε_p, was considered [17-19]. After using the matter conservation law and applying the Gauss-Ostrogradsky theorem, the following second-order partial differential equation of hydrogen diffusion is obtained:

∂C/∂t = ∇·{D [∇C - (V_H/(RT)) C ∇σ - (C/K_s) ∇K_s]}    (2)

The equilibrium concentration of hydrogen for infinite time of exposure to the harsh environment is the steady-state solution of this differential equation. It takes the form of a Maxwell-Boltzmann distribution as follows:

C_eq = C_0 K_s(ε_p) exp[V_H σ/(RT)]    (3)

where C_0 is the equilibrium hydrogen concentration for the material free of stress and strain. According to the previous equations, hydrogen diffusion is driven by: (i) the negative gradient of hydrogen concentration (in the classical Fickian sense); (ii) the positive gradient of hydrostatic stress; (iii) the positive gradient of hydrogen solubility; the latter is one-to-one related to the gradient of equivalent plastic strain, so that the plastic strain gradient can be analysed instead of the hydrogen solubility gradient.

MECHANICAL ANALYSIS: STRESS AND STRAIN Numerical simulation allows the determination of the stress and strain state under the cyclic loading of the ball-on-rod test. During rolling, the amplitude of the fatigue loading progressively decreases as the depth increases, reaching an almost uniform stress evolution near the rod core, so only points placed close to the contact will undergo real fatigue. After the fatigue loading, a multiaxial stress state appears in the rod. Thus, Figs. 2a, 3a and 4a show the global view of the distribution of radial, hoop and axial stress, respectively, in the steel rod at the end of the sixth cycle, i.e., after the passage of 17 contacting balls. For a more detailed analysis, the radial distributions of the aforesaid variables are represented in Figs. 2b, 3b and 4b for different values of the hoop coordinate θ, considering the following sections: (i) θ = 0°, representing the contact plane between one of the balls and the rod; (ii) θ = 20°; (iii) θ = 40°; and finally (iv) θ = 60° (corresponding to the symmetry plane between two contacting balls). The results shown in Figs. 2a-4a reveal a heavy multiaxial stress concentration localized at the contact zones of each ball with the rolling rod. This effect progressively vanishes as the distance from the contact zone increases. Outside the locally affected zone, the stress state is homogeneously distributed, with a stress concentration ring located in the vicinity of the rod surface. The radial distributions shown in Figs. 2b-4b reveal a huge compressive stress at the contact radius (θ = 0°) in the radial direction, caused by the pressure applied by the ball. This stress concentration is more intense for the radial stress than for the other components of the stress tensor. In addition, the local effect of the contacting balls spreads through a deeper zone (around 1.5 mm) for the radial stress (Fig. 2) than for the hoop and axial stresses (around 200 μm).
The extension and the maximum value of the compressive state are notably reduced on planes placed outside the contact planes (i.e., for θ ≠ 0°), with only slight variations for hoop coordinates higher than 20°. The distributions of the hoop and axial stresses show the same strong reduction of the magnitude of the stress state but, in these cases, without significant changes in the extension of the affected zone. The distributions for the other radii in contact with the other balls (θ = 120° and θ = 240°) are equivalent to those shown in Figs. 2b-4b. Within the stress concentration zone, the values of the von Mises stress reach the material yield strength; this implies the appearance of plastic strains near the rod skin, as revealed in previous studies [2,3]. As a consequence of the values of the stress state in the vicinity of the rod surface, plastic strains are distributed through such a zone. Fig. 5a shows the 3D view of the field of equivalent (cumulative) plastic strain after the six cycles of the test were completed, and the radial distribution of such a variable is plotted in Fig. 5b. In the same way, Fig. 6a shows the 3D view of the field of hydrostatic stress after the sixth cycle of the test was completed, and Fig. 6b shows the radial distribution of such a variable for diverse values of θ. According to this, plastic strains are distributed only over a ring-shaped plastic zone spreading over 315 μm from the periphery of the rod, as shown by the same radial distribution of plastic strain obtained for diverse radial planes (different θ angles, cf. Fig. 5b), even for those closest to the contact plane (θ = 0°). This is due to the fact that plastic strains are only generated at the contact plane. Outside this zone, the von Mises stress is always lower than the material yield strength and consequently no plastic strains are generated in this region. Thus, the plastic strain remains the same in all sections of the rod. A progressively decreasing distribution is obtained, with a small plateau of 50 μm width close to the rod cylindrical surface. The first driving force for hydrogen diffusion, the inwards gradient of equivalent plastic strain, is negative and only affects the plastic strain ring near the rod surface (Fig. 5). With regard to the second driving force for hydrogen diffusion, the gradient of hydrostatic stress, at the contact plane (θ = 0°) a distribution of compressive nature in the radial direction is obtained, progressively decreasing with depth until becoming null at a depth of about 1 mm from the rod surface (Fig. 6b). Outside the contact plane the hydrostatic stress distribution is notably reduced, so that, for angles θ higher than 20°, it is almost independent of such an angle (as happened with the distributions of radial, hoop and axial stresses). The typical profile consists of compressive stresses over 200 μm, tensile stresses for deeper points and, for radial coordinates lower than 4 mm, a null value of the hydrostatic stress. To complete the analysis of hydrogen diffusion and accumulation in the rod during its life in service, the information obtained from the estimation of hydrogen concentration in the radial direction [2,3] can be complemented by a discussion of the implications of diffusion in the circumferential direction. To do so, the circumferential distribution of the variables affecting the hydrogen diffusion assisted by stress and strain is plotted in Fig. 7, considering diverse layers within the plastic zone.
The circumferential distribution of hydrostatic stress shown in Fig. 7a reveals a local stress concentration in the vicinity of the contact plane θ = 0°, where the maximum hydrostatic stress is located. Within a range of planes of around 5°, the hydrostatic stress progressively decreases, becoming almost constant for other values of θ. This behaviour is observed for the distributions corresponding to depths around half the size of the plastic zone (approximately 173 μm), with compressive stresses outside the affected zone. As the depth from the rod surface increases, the maximum value of the stress drops markedly (by 90% for a depth around the size of the plastic zone, x = 300 μm; by 60% for a depth around half the size of the plastic zone, x = 173 μm; and by 25% for a depth of just 86 μm). Beyond this depth the stress continuously decreases until becoming almost null for deeper points. This way, in the hoop direction hydrogen will be pumped out of the contact plane by means of a huge gradient of hydrostatic stress. With regard to the circumferential distribution of plastic strains, a minimum is located close to the contact section (θ = 0°), where a slight local maximum appears, thereby creating a gradient of plastic strains. This gradient drives hydrogen out of the contact plane towards planes with higher θ. This effect vanishes with depth, being almost null for depths from the surface of 216 μm and null for depths outside the plastic zone (x > 315 μm) observed in Fig. 5. The plastic strain slowly increases with the hoop coordinate θ, reaching a maximum value at θ = 45°. So, hydrogen will be pumped suddenly out of the contact plane and later dragged slowly towards points placed at higher hoop coordinates (due to the weaker gradient far from the contact plane).
Finally, Fig. 8 shows the axial distribution of both hydrostatic stress and equivalent plastic strain for diverse values of the depth from the rod surface (x). In the axial direction, a very localized distribution of both hydrostatic stress and plastic strain near the contact plane is obtained. With regard to the hydrostatic stress distribution, the high compressive stress at the contact plane progressively decreases as the distance from the contact plane (z) increases, the distribution becoming null for z > 1.5 mm. As the depth from the rod surface increases, the hydrostatic stress at the contact plane (z = 0 mm) progressively decreases and, consequently, the inwards gradient of hydrostatic stress in the axial direction is reduced as the depth from the rod surface increases. Thus, hydrogen placed close to the contact between ball and bar is also pumped in the axial direction due to the positive inwards gradient of hydrostatic stress. This effect is progressively reduced with the depth x, becoming almost negligible for depths x > 600 μm. Finally, the axial distribution of plastic strains appears within a narrow zone, becoming null for axial distances z > 500 μm. As in the case of the hydrostatic stress distribution, the plastic strain at the contact plane (z = 0 mm) decreases with the depth from the rod surface (x) and, consequently, the inwards gradient is progressively reduced as x increases, becoming null for depths x > 600 μm. However, the inwards gradient of equivalent plastic strain is negative, and thereby hydrogen diffusion is not enhanced by it; this opposition is progressively annulled as the depth from the rod surface increases. So, two competing factors are involved in the diffusion of hydrogen placed near the contact between ball and bar: on the one hand, the inwards gradient of hydrostatic stress enhances the diffusion of hydrogen out of the contact plane, whereas, on the other hand, the inwards gradient of equivalent plastic strain opposes and impedes the aforesaid diffusion. This effect is only noticeable near the contact zone and, therefore, the diffusion of hydrogen placed at deeper points (x > 600 μm) can be considered to be driven only by the gradient of hydrogen concentration in the axial direction.

CHEMICAL ANALYSIS: HYDROGEN TRANSPORT BY DIFFUSION For assessing the HA-RC-MF behaviour of the rolling rod, it is interesting to analyse the long-time behaviour of the component under hydrogen exposure. To this end, the steady-state distribution of hydrogen concentration through the rod radius was obtained (Fig. 9) using Eq. (3), taking into account both the hydrostatic stress and the equivalent plastic strain. The plot is associated with infinite time (the steady-state solution from the mathematical point of view) or with thermodynamical equilibrium of the hydrogen-metal system (from the physical point of view). The discussion of HA-RC-MF focuses on the quantitative analysis of the hydrogen amount in the radial, hoop and axial directions, obtained by applying the steady-state solution of the diffusion equation, Eq. (3), to the distributions of the stress tensor components and plastic strain shown in Figs. 2-4 and 5, respectively. Thus, Fig. 9 shows the radial laws of hydrogen concentration along diverse radial planes considering the stress and strain states shown in Figs. 5 and 6, whereas Figs. 10 and 11 show the distribution of hydrogen concentration in the hoop and axial directions, respectively.
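A short numerical illustration of Eq. (3) is sketched below; the linear solubility law and the values of V_H and T are typical assumptions for hydrogen in steel, not data taken from this study.

```python
import numpy as np

R = 8.314       # universal gas constant [J/(mol K)]
V_H = 2.0e-6    # partial molar volume of hydrogen in steel [m^3/mol] (typical value)
T = 293.0       # absolute temperature [K]

def c_eq_over_c0(sigma, eps_p, k=1.0):
    """Steady-state C_eq/C0 of Eq. (3).
    sigma: hydrostatic stress [Pa]; eps_p: equivalent plastic strain."""
    K_s = 1.0 + k * eps_p                      # assumed linear solubility law
    return K_s * np.exp(V_H * sigma / (R * T))

# Compressive stress at the contact depletes hydrogen; mild tension enriches it:
print(c_eq_over_c0(-1.0e9, 0.02))   # ~0.45 at the contact plane
print(c_eq_over_c0(+1.0e8, 0.0))    # ~1.09 in the tensile region
```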
According to these results, for long times of exposure to the hydrogenating environment, the hydrogen amount in the vicinity of the rod surface (within the stress- and strain-affected zone of the rod, i.e., for depths from the rod surface lower than 1 mm) progressively increases with the circumferential distance from the contacting ball. Thus, for the plane where the ball contacts the rod, a huge reduction of the hydrogen amount is observed due to the high compressive stresses produced by the contact pressure (Fig. 6). Consequently, hydrogen diffusion is promoted out of the contact-affected zone by both driving forces for hydrogen diffusion: the inwards gradient of plastic strain and the inwards gradient of hydrostatic stress. This effect also progressively vanishes as the distance from the contact plane increases, being noticeable for planes very close to the contact plane, where an important reduction of the hydrogen concentration is also achieved. Nevertheless, for planes with circumferential coordinates higher than 10° from the contact plane, the hydrogen amount at the surface is similar to that obtained for higher angles θ; the hydrogen distributions for planes with circumferential coordinate higher than 10° exhibit just slight changes. An interesting issue is observed for these planes from the point of view of HA-RC-MF. In the vicinity of the rod surface, over a depth of 200 μm, a significant reduction is observed due to the compressive stresses (Fig. 6), with a small plateau of 50 μm. Thereafter the hydrogen concentration increases with depth, reaching the maximum value of the distribution (C/C0 = 1.15) at a depth of 300 μm; for deeper points it decreases gently towards the rod core, where the concentration associated with thermodynamical equilibrium of the material free of stress and strain (C0) is reached. So, the potential damage location would be a zone extending between the balls at a depth of 300 μm from the surface. The hoop distribution of hydrogen for long times of exposure to the hydrogenating environment is presented in Fig. 10, where different depth layers are depicted, considering the stress and strain states obtained from the numerical simulation (Fig. 7). Regarding the hoop distribution of hydrogen concentration shown in Fig. 10, the hydrogen accumulation is located outside the contact plane (θ = 0°) and its surrounding planes, where a huge reduction of the hydrogen amount is observed. This reduction becomes smaller as the depth from the surface increases. Outside this zone, hydrogen is uniformly distributed for depths up to 86 μm. For the distributions obtained at greater depths, the hydrogen amount progressively increases for θ > 5°, reaching a maximum at θ = 20° and thereafter decreasing slowly. The aforesaid trend is repeated at deeper layers, the maximum-hydrogen zone growing as the depth increases (in accordance with the distributions of hydrostatic stress shown in Fig. 6a). For layers placed far from the contact, hydrogen is almost uniformly distributed in the hoop direction, reaching a maximum concentration C/C0 = 1.13 at a depth of 300 μm. Thereafter, the hydrogen distribution progressively decreases, approaching the equilibrium hydrogen concentration for the material free of stress and strain (Ceq/C0 = 1). So, the maximum hydrogen amount is located outside the contact plane at a depth of 300 μm from the rod surface.
To conclude the analysis, the hydrogen distribution in the axial direction is presented in Fig. 11 for diverse depths within the plastic-strain-affected zone previously observed in Fig. 5. The hydrogen concentration is strongly decreased in the vicinity of the contact zone (0 < z < 1.2 mm): the closer to the contact area, the greater the reduction of the hydrogen amount. For a depth of 700 μm, the distribution of hydrogen is scarcely affected by the stress and strain states generated by HA-RC-MF.

CONCLUSIONS In a ball-on-rod test, non-uniform plastic strains are generated on the contact plane, where the ball applies a huge pressure to the rod, overcoming the material yield strength. This state is located near the rod surface, with a plastic zone spreading over a maximum depth of 300 μm. A huge compressive stress appears in the vicinity of the rod surface; it is progressively reduced as the distance from the surface increases in the radial, hoop and axial directions. As a result, hydrogen accumulates outside the contact plane, where a huge reduction of the hydrogen amount is achieved for long times of exposure to the environment due to the high compressive hydrostatic stress in the radial direction, thereby pumping hydrogen towards points outside the contact plane. The maximum hydrogen amount appears at a depth of about 300 μm from the surface, on planes placed 20° away from the contact plane in the contact cross-section of the bar.

Figure 1: (a) Scheme of the analysed geometry for a ball-on-rod test and (b) 3D geometry.
Figure 2: Distribution of radial stress after the sixth loading cycle: (a) 3D view at the contact of one of the balls and (b) radial distribution for diverse hoop coordinates θ. (Figures 3 and 4 show the corresponding distributions of hoop and axial stress.)
Figure 5: Distribution of equivalent plastic strain after the sixth loading cycle: (a) 3D view at the contact of one of the balls and (b) radial distribution for diverse hoop coordinates θ.
Figure 6: Distribution of hydrostatic stress: (a) 3D view of the contact plane and (b) radial distribution for diverse circumferential coordinates θ.
Figure 7: Circumferential distribution of (a) hydrostatic stress and (b) equivalent plastic strain at diverse layers of the rod between the contacting balls.
Figure 8: Axial distribution of hydrostatic stress for diverse depths (x): (a) general plot and (b) detail plot near the rod surface (zone with strong gradients).
Figure 9: Radial distribution of the hydrogen concentration for diverse circumferential coordinates θ: (a) general plot and (b) detail plot near the rod surface.
Figure 10: Hydrogen distribution for long times of diffusion in the circumferential direction at diverse layers of the rod between the contacting balls.
Figure 11: Hydrogen distribution for long times of diffusion in the axial direction at diverse layers of the rod between the contacting balls.
MeV Dark Matter: Model Independent Bounds

We use the framework of dark matter effective field theories to study the complementarity of bounds for a dark matter particle with mass in the MeV range. Taking properly into account the mixing between operators induced by the renormalization group running, we impose experimental constraints coming from the CMB, BBN, LHC, LEP, direct detection experiments and meson decays. In particular, we focus on the case of a vector coupling between the dark matter and the standard model fermions, and study to what extent future experiments can hope to probe regions of parameter space which are not already ruled out by current data.

Introduction and Statement of the Problem The nature of Dark Matter (DM) is one of the greatest puzzles of modern particle physics, as is the nature of its interactions with the particles of the Standard Model (SM). Despite decades of experimental effort, the only interaction between DM and SM particles that has been confirmed experimentally is gravity. However, if other interactions are present, it would be desirable to have a model independent way to test how much parameter space has actually been probed and whether there is room for discovery in future experiments. This can be achieved using Effective Field Theory (EFT) techniques, in which the only light degrees of freedom are the SM particles and the DM (see [1-4] for some of the early works on the subject). The advantages of this approach are clear: it is as model independent as we can get, and it relies on just a few assumptions (namely, that the New Physics (NP) mediating the DM-SM interactions is heavier than the Electroweak (EW) scale and that it respects the SM gauge symmetry). On the other hand, since all the correlations between different operators (present in any concrete model) are lost, it is usually unfeasible to perform a global analysis involving more than a few operators. Still, much information can be obtained, and it is on this framework that we will focus. As is well known, most of the theoretical and experimental activity over the last decades has focused on the Weakly Interacting Massive Particle (WIMP) paradigm, i.e. a DM candidate with mass in the GeV-TeV range and with typical cross sections of Electroweak (EW) size. As a matter of fact, the region currently probed by Direct Detection (DD) experiments is restricted to DM masses above 5 GeV [5,6]. In addition, one should also consider bounds from indirect detection and collider experiments, and possibly check whether the simple thermal freeze-out mechanism can explain the observed DM abundance. We stress that a problem may arise in applying the LHC bounds to the EFT operators: given the high centre of mass energy of the LHC, for any fixed cutoff around a few TeV part of the produced events will have an energy above the cutoff. These events fall beyond the validity of the EFT and, as such, should not be used in the computation of the bounds. This has motivated the use of simplified models [7] as a useful intermediate step between the EFT and complete models. On the other hand, as shown in [8], it is possible to obtain robust collider limits if the centre of mass energy of the event is required to be below the EFT cutoff.
One of the most surprising results of the analysis in [8] is that, applying naive power counting to the Wilson coefficients and making a one-coupling-one-scale assumption (in the sense that only one cutoff scale Λ and one coupling g* appear in all the EFT operators), only couplings g* ≳ 2 are currently probed at the LHC. The same analysis has been applied to other operators in [12,13]. Given the plethora of null results challenging the WIMP paradigm, in the last few years interest has turned to other regions of parameter space. In particular, the MeV region has emerged as an interesting possibility, with many well motivated models (see for example the SIMP case [14,15], some models of asymmetric DM [16] and even some supersymmetric models [17,18]). The purpose of this paper is to extend the model independent EFT analysis to the case of MeV DM, highlighting the complementarity of searches and pointing out to what extent and in which cases we should expect some signal in future experiments (especially in the g* = 1 case, in which no LHC limits are available). Although less explored, some constraints are already available on the MeV DM parameter space. For instance, Cosmic Microwave Background (CMB) bounds already force the s-wave annihilation cross section into SM particles to be below the thermal one for masses below 10 GeV [19-23], and Big Bang Nucleosynthesis (BBN) bounds may put strong constraints on the annihilation into quarks [24]. This means that, if the DM has indeed a sub-GeV mass, the thermal freeze-out paradigm has to be abandoned unless the dominant annihilation channels are p-wave suppressed. Moreover, bounds coming from colliders [12,13,25], meson decays [26,27], indirect searches [28,29] and Z-physics at LEP [30] must also be considered. Finally, the MeV region can in principle be probed in the future by DD experiments measuring DM-electron scattering [31-38] and in high intensity neutrino beam facilities [39-41]. As can be seen, different SM particles are involved in the processes considered. As such, it looks like the only situation in which these constraints can be combined in a meaningful way to put bounds on the EFT coefficients is when the DM couples universally to all SM particles at the scale Λ where the interactions are generated. However, this is not the only possibility. As shown in [42-44], dimension 6 operators mix in the running between Λ and the low energy scale at which the experiments are performed. The result is that even if some operator is not present at high energy, due to some unknown selection rule of the UV theory, it will be generated at low energy by the renormalization group running. Of course, how important this mixing is in imposing bounds depends crucially on the initial value of the Wilson coefficient at the scale Λ, and on the scale Λ itself. Using this information, the complementarity of bounds in DM searches for a DM mass above 10 GeV has been explored in [44] for universal couplings to quarks, to leptons and to third generation fermions, and in [45] for Z′ models. Moreover, bounds on purely leptophilic models coming from the LHC have been analyzed in [46]. Models with the correct relic density for MeV dark matter are given, for example, in [47,48]. The paper is organized as follows. We first briefly recall the relevant operators which we will consider throughout the article. We then present current and future constraints that apply to MeV DM.
Finally, we put together all the constraints, taking properly into account the Renormalization Group Equations (RGE's), and show the available parameter space for some UV configurations.

DM EFT and Running As already mentioned in the Introduction, the main hypotheses behind the DM-EFT are that the only light degrees of freedom below the cutoff are the SM particles and the DM, and that at the cutoff the whole SM gauge symmetry is respected. For definiteness, we will always take the DM to be a Dirac singlet fermion χ with mass m_DM. At the scale Λ the lagrangian is given by

L = L_SM + χ̄(i∂̸ − m_DM)χ + (g*²/Λ²) Σ_i c_i O_i + … ,    (2.1)

where L_SM is the SM lagrangian and the dots represent all the operators constructed out of the SM particles only (see [49] for the complete list). In writing Eq. (2.1) we are making a one-coupling-one-scale assumption (with the coupling g* = g*(Λ) defined at the scale Λ) and we restrict the sum to dimension 6 operators only. For our purposes, this is justified since operators of dimension 5 do not mix under renormalization [43] (although they generate dimension 7 operators that can mix below the EW scale [42]). At the dimension 6 level, 32 operators are present [43]. All of them can be written as the product of a DM and a SM current,

O_i = J^χ_μ J^μ_SM,

where J^μ_χ ∈ {χ̄γ^μχ, χ̄γ^μγ⁵χ}. Above m_Z the SM currents are built out of the electroweak multiplets (the left-handed quark and lepton doublets, the right-handed singlets and the Higgs current iH†↔D^μH), while below m_Z they reduce to

ūγ^μu, d̄γ^μd, ēγ^μe, ūγ^μγ⁵u, d̄γ^μγ⁵d, ēγ^μγ⁵e,

the first set being appropriate in the unbroken EW phase (above m_Z) and the second in the broken EW phase (below m_Z). Above m_Z each operator appears three times, once for each generation (for simplicity, we will assume throughout the paper that the SM currents are flavor conserving), while below m_Z the top quark does not appear, since it has been integrated out. Notice that since iH†↔D_μH = [√(g² + g′²)/2] (h + v)² Z_μ, this operator does not appear below m_Z, because both the Z and the h bosons have been integrated out. The anomalous dimension matrices that mix the effective operators in the running above and below m_Z have been computed in [42-44], and are independent of the form of the DM current (since the DM current is a complete SM singlet, it does not contribute to the running). In order to compute the Wilson coefficients of the various operators at a scale µ relevant for the experiments, we use the public code runDM [50]. A comment about the interpretation of the results is in order. Our exclusions are strictly valid for experiments performed at a typical energy E ≪ Λ, since in these cases the mediator can obviously be integrated out. For experiments performed with E ∼ Λ (as can be the case for LEP II or the LHC, as we will see below), the bounds should be computed keeping the mediator that generates Eq. (2.1) in the spectrum. However, such bounds are model dependent (the details of how the mediator couples to the DM are important when considering resonant production). As shown in [8], the approach we will take gives a model independent bound even in the region in which the resonant production of the mediator is important, in the sense that we exclude a smaller region of parameter space. In what follows, we will focus on the so-called D5 operator [4], the product of the vector DM current and a vector SM current. In particular, we will analyze a leptophobic case, with universal coupling to quarks only,

O_D5^(q) = (χ̄γ_μχ) Σ_q (q̄γ^μq),

and a leptophilic case, with universal coupling to leptons only,

O_D5^(ℓ) = (χ̄γ_μχ) Σ_ℓ (ℓ̄γ^μℓ).

We will briefly comment on other possibilities at the end of Section 4.
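To illustrate the mechanics of the operator mixing (not the actual numbers, for which the public code runDM [50] should be used), a leading-log evolution sketch with a placeholder anomalous-dimension matrix:

```python
import numpy as np

def leading_log_run(c_high, gamma, Lambda, mu):
    """One-step leading-log evolution: c(mu) ~ c(Lambda) - gamma^T c(Lambda) log(Lambda/mu)."""
    return c_high - gamma.T @ c_high * np.log(Lambda / mu)

gamma = np.array([[0.0, 0.01],   # placeholder anomalous dimensions,
                  [0.01, 0.0]])  # NOT the actual matrices of [42-44]
c_high = np.array([1.0, 0.0])    # e.g. quark coupling on, lepton coupling off at Lambda
print(leading_log_run(c_high, gamma, Lambda=1000.0, mu=200.0))  # lepton entry now non-zero
```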
Experimental bounds In this section we present the bounds from current or past experiments that can be applied to MeV DM. We summarize in Table 1 all the experimental bounds and the operators to which they apply.

Bounds on the annihilation cross section The DM annihilation cross section is bounded by CMB, BBN and indirect detection constraints. Self-annihilation of dark matter particles may inject hadronic or electromagnetic energy into the intergalactic medium and thereby alter the thermal history of the Universe. Since recombination and primordial nucleosynthesis are well understood, bounds from the CMB and from BBN are in general important. In the case of the CMB, free electrons remaining after recombination can scatter off CMB photons and modify the CMB power spectrum. CMB data from WMAP and Planck set limits on the annihilation parameter P_ann ≡ f(z)⟨σv⟩/m_DM, given in terms of the thermally averaged cross section ⟨σv⟩ and the dark matter particle mass m_DM. The redshift-dependent efficiency function f(z) represents the amount of energy absorbed overall by the gas, and it is species dependent. The latest constraint from the Planck Collaboration is P_ann < 4.1 × 10⁻²⁸ cm³/s/GeV at 95% C.L. [23]. The CMB bound already rules out thermal s-wave annihilation cross sections for m_DM ≲ 10 GeV [19-23]. In the future, Cosmic Variance Limited experiments have the potential to constrain P_ann < 8.9 × 10⁻²⁹ cm³/s/GeV [19], i.e. a factor of ∼5 more stringent than current bounds. In the case of MeV DM, additional care must be taken in the computation of the CMB bound, because the DM pair will annihilate to mesons rather than quarks. The coupling between mesons and the DM currents has been computed in [51] in the context of Chiral Perturbation Theory (see also Appendix A for more details), and in our computation we will consider all possible decays into light mesons and light leptons. We list in Table 1 the operators involved. In order to impose the bounds from the CMB, we use the equations in Appendix A, taking the appropriate thermal average. We set f(z) = 1 for the annihilation to mesons and take the bound on the annihilation cross section to electrons from [52]. The choice f(z) = 1 is an overestimate of the bound; it turns out, however, that even with f(z) = 1 the meson contribution to P_ann is always subdominant with respect to the electron one. Therefore, in setting the limits in Sec. 4, we will consider only the annihilation to electrons. Turning to primordial nucleosynthesis, the injection of electromagnetic or hadronic energy into the intergalactic medium can dissociate already formed nuclei or can alter the neutron/proton ratio through pion exchange. The case of sub-GeV DM has been considered in Ref. [24]. Overproduction of ³He puts bounds on the annihilation cross section into electrons, while deuterium overproduction puts bounds on the χχ → b̄b annihilation cross section. The bound on χχ → e⁺e⁻ is always weaker than the CMB bound, while for a DM mass between 4 GeV and 20 GeV the bound on ⟨σv⟩_{χχ→b̄b} is slightly stronger than the CMB one. As we are going to see, though, in the same region the bound coming from direct detection experiments is always stronger. Concerning indirect detection searches, bounds on MeV DM coming from diffuse X-ray and gamma-ray observations have been computed in [28], while more recently bounds from cosmic-ray electrons and positrons have been computed in [29] (see also [53]). In the case of diffuse X-ray and gamma-ray data, model independent bounds can be put on the annihilation cross section to electrons. For m_DM ≲ 30 MeV, the limit coming from INTEGRAL and COMPTEL is of order ⟨σv_rel⟩ ≲ 10⁻²⁷ cm³/s, while for larger DM masses the bound becomes less and less stringent, until it reaches the FERMI value ⟨σv_rel⟩ ≲ 10⁻²⁴ cm³/s for m_DM ≃ 1 GeV [28]. Turning to cosmic-ray data, limits can be extracted from Voyager 1 and AMS-02 [29]. For masses around m_DM ≃ 10 MeV, these limits are slightly more stringent than those obtained from diffuse X-ray and gamma-ray data. Still, they are roughly an order of magnitude weaker than those obtained from the CMB.
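Checking a candidate cross section against the Planck bound is straightforward; a sketch with illustrative inputs (and the conservative choice f(z) = 1 made above):

```python
PLANCK_BOUND = 4.1e-28          # [cm^3/s/GeV], 95% C.L. [23]

def p_ann(sigma_v, m_dm, f=1.0):
    """P_ann = f(z) <sigma v> / m_DM; sigma_v in cm^3/s, m_dm in GeV."""
    return f * sigma_v / m_dm

# A thermal-size s-wave cross section at m_DM = 100 MeV is clearly excluded:
print(p_ann(3e-26, 0.1) > PLANCK_BOUND)   # True
```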
Collider constraints: LEP and the LHC As shown in [25], mono-photon searches at LEP II can put bounds on the operators involving electrons listed in Table 1, although only the bounds on the operators (χ̄γ_µχ)(ēγ^µe) and (χ̄γ_µγ⁵χ)(ēγ^µγ⁵e) have been computed. For m_DM ≲ 20 GeV, the constraint on the two operators is the same and is as strong as Λ ∼ 500 GeV for Wilson coefficients equal to 1. As explained in the Introduction, to ensure the validity of the EFT in the considered events, the cut E_cm = √((p_DM,1 + p_DM,2 + p_γ)²) < Λ should be imposed, analogous to what was proposed in [8,12,13]. A complication arises, however: the mono-photon data were collected with centre of mass energies scanning between 180 GeV and 209 GeV, so it is not completely well defined which energy scale should be used in the computation of the Wilson coefficients. Since the analysis of Ref. [25] was performed supposing E_cm = 200 GeV, in what follows we will simply take all the coefficients computed at a scale µ ≃ 200 GeV, and declare that scales below this energy cannot be probed within the validity of the EFT. Other signatures can be better exploited at the LHC. In particular, the strongest experimental constraints come from mono-jet searches [54-57], which can be used to put bounds on the operators listed in Table 1. We recast the ATLAS search [55], imposing the cut E_cm < Λ, where E_cm is the centre of mass energy of the process, E_cm ≡ √((p_DM,1 + p_DM,2 + p_j)²). The ATLAS analysis taken into consideration allows for multiple jets, and the cuts require at least one jet with p_T > 120 GeV, allowing for the presence of soft and collinear jets. We implement the dimension six operators in Feynrules [58] and use MadGraph5_aMC@NLO [59] to generate events at matrix element level with the mono-jet topology. We then pass the events to PYTHIA 6 for parton showering and hadronisation [60]. In particular, we generate 200k events at parton level with 0-, 1- and 2-jets and perform the final recast with MadAnalysis5 [61], modifying existing code [12,62]. One of the main outcomes of the cut is that for couplings g* = 1 no bound is found, while for g* = 4π we find that, for m_DM ≲ 100 GeV, the region 400 GeV ≲ Λ ≲ 12 TeV is excluded. In particular, the region Λ ≲ 400 GeV is not currently probed by the LHC, not even for large couplings. Let us conclude with some remarks on the bounds that can be extracted from Z physics. When the "Higgs portal" operators (χ̄γ_µχ)(iH†↔D^µH) and (χ̄γ_µγ⁵χ)(iH†↔D^µH) are generated at the scale Λ, they have two effects. First, they contribute a threshold correction to the evolution of the four-fermion operators (through the SM coupling between the Z boson and the SM fermions) [43]. Second, they generate a Z−χ−χ interaction that can be bounded by the Z invisible width, Γ^NP_inv < 1.5 MeV [30]. Both effects can be used to set stringent limits on the parameter space.
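To make the EFT validity requirement concrete, a sketch of the event-level cut used in such recasts, with four-vectors (E, px, py, pz) in GeV as purely illustrative inputs:

```python
import numpy as np

def passes_eft_cut(p_dm1, p_dm2, p_j, Lambda):
    """Keep an event only if the invariant mass of the DM pair plus jet is below Lambda."""
    p = np.asarray(p_dm1) + np.asarray(p_dm2) + np.asarray(p_j)
    m2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2    # Minkowski invariant mass squared
    return np.sqrt(max(m2, 0.0)) < Lambda

print(passes_eft_cut([500, 0, 0, 480], [300, 50, 0, -280],
                     [200, -50, 0, -150], Lambda=1000.0))   # True: E_cm ~ 999 GeV
```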
Meson decays For MeV DM the invisible decays of mesons play an important role. In what follows, we will focus for simplicity only on the invisible decays arising at tree level. In particular, we consider the decays Υ → χχ and J/ψ → χχ [26]. Since m_Υ ≃ 10 GeV and m_J/ψ ≃ 3 GeV, the bounds will be relevant for m_DM < 5 GeV and m_DM < 1.5 GeV, respectively, and the probed Λ scales will be above the Υ and J/ψ masses. The meson masses are also the typical energy of the process, at which the relevant Wilson coefficients must be computed. The angular momentum and C/P transformation properties of the initial state determine which operators are involved in the decay process, and the different possibilities are listed in Table 1. The 90% C.L. constraints on the branching ratios for invisible decays of Υ(1S) and J/ψ measured by BABAR and BES are BR(Υ(1S) → invisible) < 3.0 × 10⁻⁴ and BR(J/ψ → invisible) < 7.2 × 10⁻⁴. On the other hand, the meson decays to ν̄ν via a Z boson are negligible: BR(Υ(1S) → ν̄ν) < 9.8 × 10⁻⁶ and BR(J/ψ → ν̄ν) < 2.77 × 10⁻⁸. It is therefore enough to compute the branching fraction for the bound state to decay to DM. For each process, the bounds are practically equal for all the operators involved, and can be as strong as Λ ≃ 200 GeV for the Υ → χχ decay for a UV Wilson coefficient of order unity [26,27]. Although per se the bound is not very strong (much weaker than the LEP or LHC one, for instance), it is helpful for closing, in a model independent way, the small-Λ window left open by colliders. In the future, according to [63], we can expect roughly a factor of 10 improvement in the sensitivity to BR(Υ → χχ) at Belle II, which translates into an improvement of the bound on Λ of about a factor of 2.

Direct Detection Experiments Direct detection experiments have set stringent constraints on the dark matter-nucleon scattering cross section for dark matter masses larger than ∼5-10 GeV. Indeed, for spin independent scattering, LUX and Xenon1T have reached a cross section limit of ∼10⁻⁴⁶ cm² for m_DM ∼ 30 GeV at 90% C.L. [5,6]. On the other hand, the low mass region is weakly probed: for light dark matter, the fraction of initial energy transferred to the nucleus is suppressed by m_DM/m_N, leading to negligible recoil energy. The LUX and Xenon1T experiments probe DM scattering with nucleons only down to masses of about 5 GeV, the maximum exclusion being σ_SI ∼ 10⁻⁴² cm². However, the forecasted sensitivity of SuperCDMS in the Si and Ge modes will be able to probe the MeV parameter region down to m_DM ≃ 400 MeV, with sensitivity to exclude DM-nucleon cross sections down to 10⁻⁽³⁹÷⁴³⁾ cm² (depending on the DM mass) [64]. Although the recoil energy for a light dark matter particle scattering off nuclei is negligible, the kinetic energy involved in the process is large enough to ionize the target atom, and experiments like Xenon10 and Xenon100 can detect the ionization of a single atom [32]. The Xenon10 experiment, using only 12 days of calibration data, can weakly probe the scattering cross section of MeV dark matter on free electrons, down to ∼10⁻³⁸ cm². Despite the weak bound, future experiments (or the analysis of data from current experiments such as Xenon100 and LUX) could produce competitive limits. Moreover, different materials and processes can improve the limit on DM scattering off electrons [31,33-38].
The most promising process seems to be DM scattering off electrons in semiconductor targets [33], which can reach a sensitivity of about σ_e ∼ 10⁻⁽⁴³÷⁴²⁾ cm². We refer to Table 1 for the list of the operators contributing to DM-electron scattering and giving an unsuppressed contribution to Spin Independent (SI) direct searches. The Wilson coefficients should be computed at the scale µ ≃ 1 GeV.

Relic Density Now we discuss how to obtain the correct relic density for MeV DM. As pointed out in Section 3.1, CMB bounds generically rule out a thermal s-wave annihilation cross section for DM masses m_DM ≲ 10 GeV. This leaves open the possibility that either the DM is produced thermally via p-wave annihilation, or that a non-thermal production mechanism must be invoked. In the case of the D5 operator (see the end of Sec. 2), the annihilation cross section is s-wave and the relic abundance should be produced non-thermally. Following Ref. [65], we can compute the relic abundance within the validity of the EFT if we suppose that the reheating temperature at the end of inflation is small enough not to produce the degrees of freedom that have been integrated out, i.e. T_RH < Λ. In this case, most of the DM production happens at temperatures much larger than the mass of the DM or SM particles. Considering for simplicity a universal coupling to all fermions, the resulting abundance is expressed in terms of the reheating temperature T_RH, the present temperature T_0 = 2.7 K and the number of effective degrees of freedom in entropy, g*^s [65]. Imposing Ω_DM h² ≃ 0.12, we obtain the value of Λ able to reproduce the observed relic abundance, Eq. (3.2). As we are going to see in Section 4, this region of parameter space is not currently probed, and will not be probed in future experiments.

Summary of the constraints In this section we compare all the present bounds and future sensitivities discussed in Section 3. We are interested in determining in which cases, if any, the parameter space to be probed by future experiments is already ruled out in a model independent way by current results. In the dark matter effective field theory we have two mass scales, the dark matter mass and the cutoff scale Λ, and one coupling. We will present our results in the two-dimensional parameter space (m_DM, Λ), fixing the effective coupling to the maximum value allowed by perturbativity, g* = 4π, and to g* = 1. For concreteness, and to avoid bounds from structure formation [66,67], we focus on the mass range 1 MeV ≤ m_DM ≤ 10 GeV. We consider two benchmark models for the operator O_D5 introduced at the end of Section 2: universal couplings to all quarks (Sec. 4.1) and universal couplings to all leptons (Sec. 4.2). We will comment on other possibilities in Section 4.3.

Universal couplings to quarks: leptophobic case We start by considering the case in which the DM vector current couples only to the quark vector current with flavor universal couplings. At the scale Λ > m_Z the effective interactions are described by

L_eff = (g*²/Λ²) (χ̄γ_µχ) Σ_q (q̄γ^µq),

while for Λ < m_Z the top has to be consistently integrated out. We would therefore expect only experiments involving interactions between quarks and dark matter to contribute. However, the running of the Wilson coefficients also induces low energy couplings to leptons, c_V^(ℓ). Solving the RGE's in the leading-log approximation [44,46], the induced leptonic coefficients are proportional to the quark coefficients times a loop factor and the logarithm of the ratio of scales (with a Heaviside step function switching off the electroweak mixing contribution when Λ is below the EW scale). This makes it possible to obtain limits on Λ from LEP and from future DM-electron scattering experiments. In Fig. 1 we show the excluded parameter space in the (m_DM, Λ) plane.
In Fig. 1 we show the excluded parameter space in the (m_DM, Λ) plane. In the upper (lower) panels we show the results for g_* = 4π (g_* = 1), while the left (right) panels show the current (future) exclusions.

Figure 1: In the left panels the blue, red, yellow, green and purple regions are ruled out respectively by colliders (LEP and LHC), direct detection (LUX), meson decays (BaBar and BES), CMB experiments and BBN. The grey region represents the limit of validity of the EFT, Λ < 2m_DM. In the right panels the green, emerald and orange regions will be probed respectively by future CMB, DM-electron scattering and direct detection (SuperCDMS) experiments, while the grey area is already excluded by current experiments. The two upper panels consider an effective coupling g_* = 4π for current (left) and future (right) experiments. The lower panels show results for an effective coupling g_* = 1.

Let us start with g_* = 4π. In the upper left panel, the large couplings at the scale Λ lead to important LHC bounds (blue region), as discussed in Sec. 3.2. Moreover, the induced coupling to electrons is also sizeable, so that the limits from LEP apply as well (this is the reason why the collider bounds extend down to Λ ∼ 200 GeV). The yellow area is excluded by meson decays. In particular, the upper limit of about 2 TeV is set by the Υ(1S) invisible decay (hence it applies to DM masses up to 5 GeV), while the lower limit is set by a combination of the bounds from the Υ(1S) and J/ψ decays. The J/ψ decay sets a stronger lower limit for DM masses up to 1.5 GeV, where we clearly see the threshold due to the closure of this channel. The limits from the CMB are able to cover the whole range of Λ not covered by colliders and meson decays, since the annihilation cross section is s-wave (see Appendix A). As expected, the direct detection bounds from LUX are relevant only for m_DM ≳ 5 GeV. Concerning future experiments (upper right panel), DM-electron scattering and CMB limits will mostly probe parameter space that is already ruled out. Interesting information will instead come from future direct detection experiments such as SuperCDMS.

Turning to g_* = 1, the bounds are, as expected, much less severe. First, there are no LHC bounds, due to the issue of the validity of the EFT discussed in Section 3.2. In addition, the g_* coupling at Λ is now too small to induce a relevant coupling to electrons, so that the LEP bound is also absent. The limits from meson decays and from LUX are weaker because they simply rescale with the coupling. As for the CMB limits, for DM masses above 0.01 GeV the bound is basically a rescaling of the bound in the upper panel (although, the coupling being generated through running, there are some distortions). Below this mass we suddenly lose sensitivity, because we compute the Wilson coefficient at a scale µ ≈ 1 GeV instead of µ ≈ 2m_DM. The same happens in the right panel for the region probed by DM scattering off electrons. Future direct detection experiments such as SuperCDMS will set strong limits on the scale Λ for DM masses above ∼300 MeV. For small couplings, competitive bounds may come from DM-electron experiments in the region with small DM mass and small Λ. Let us stress that, comparing the bounds in Fig. 1 with Eq. (3.2), not even future experiments will be able to probe the region in which non-thermal relic production is effective.
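The statement that the meson-decay and LUX limits "simply rescale with the coupling" can be made explicit. If the operator coefficient is g_*²/Λ² (the normalization assumed in this sketch), any observable that bounds that combination yields a lower limit on Λ scaling linearly with g_*. A minimal sketch, using the ∼2 TeV meson-decay limit quoted above for g_* = 4π as the reference point:

```python
import math

def rescaled_lambda_limit(lambda_ref_gev, g_ref, g_new):
    """Rescale a lower limit on Lambda when the UV coupling changes.

    Assumes the operator coefficient is g*^2 / Lambda^2, so a fixed
    experimental bound on that combination gives Lambda_limit ~ g*.
    """
    return lambda_ref_gev * (g_new / g_ref)

# Upsilon(1S) -> invisible excludes Lambda up to ~2 TeV for g* = 4 pi (text).
# For g* = 1 the same measurement only reaches:
print(rescaled_lambda_limit(2000.0, 4 * math.pi, 1.0))  # ~160 GeV
```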
Universal couplings to leptons: leptophilic case

As a second scenario, we consider a SM gauge invariant effective field theory in which the dark matter current couples universally only to leptons. In this case, the running induces low energy Wilson coefficients c_V^(q) for the light quarks [44]. The presence of these couplings makes constraints from direct detection experiments possible: the contact interactions at the scale Λ do not involve light quarks, so the DM-nucleon scattering cross section comes only from the radiatively induced interactions with light quarks. The same happens for meson decays. This is visible in the top panels of Fig. 2, where we show the constraints for this scenario with g_*(Λ) = 4π. The strongest limits come from colliders (blue region), via the LEP experiment, which excludes Λ between ∼200 GeV and ∼6 TeV, and from the CMB (green), which strongly constrains the annihilation cross section to electrons. The constraints from meson decays (yellow) are weaker, because the couplings to light quarks arise only radiatively, and exclude Λ between ∼3 and ∼100 GeV for dark matter masses below 5 GeV. For dark matter heavier than 5 GeV, the strongest limits are due to LUX (red). The right panel of the first row shows the reach of future CMB (green), DM-electron scattering (emerald) and direct detection (orange) experiments.

The bottom panels show the exclusions for a coupling g_* = 1. With such small couplings, the running of the Wilson coefficients is not enough to set bounds from meson decays, and the bounds from LUX are reduced. The absence of the meson decay limits leaves unexplored a small region between the LEP lower limit and the CMB bound. This region will hardly be covered by the next generation of CMB or DM-electron scattering experiments. As in the leptophobic case, comparing the bounds in Fig. 2 with Eq. (3.2) shows that the region in which non-thermal relic production yields the correct relic abundance is not probed now and will not be probed by future experiments.

Other cases

Here we discuss a few other possibilities that may arise. First, we consider the situation in which the D5 operator involves a universal coupling to all the SM fermions. Since in this case all the couplings are turned on at tree level, the running has a rather minor effect on the bounds. In fact, we have checked that the excluded regions correspond to the strongest constraints from the leptophobic and leptophilic cases analyzed in the two previous sections: for g_* = 4π, the whole region below Λ ≈ 10 TeV is probed, with the bound set by the LHC limit. On the other hand, for g_* = 1 the upper bound is dominated by the LEP constraint, with Λ ≲ 500 GeV excluded.

Another interesting situation is the so called "Higgs portal". In this case, only one of the operators (χ̄γ^µχ)(iH† ↔D_µ H) and (χ̄γ^µγ⁵χ)(iH† ↔D_µ H) is turned on at the scale Λ. The coupling to fermions arises below m_Z, once the Z boson is integrated out. As discussed in Section 3.2, the Higgs portal operators induce a Zχχ coupling that can be bounded by the Z invisible decay width. This bound turns out to be strong: we have checked that for g_* = 4π most of the parameter space is excluded for Λ ≲ 10 TeV, while for g_* = 1 the bound is relaxed to Λ ≲ 1 TeV. Moreover, these operators induce severe constraints from dark matter scattering off nuclei for m_DM ≳ 5 GeV [43].
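The Z-invisible-width bound quoted for the Higgs portal can be checked with a short estimate. After electroweak symmetry breaking, the operator (χ̄γ^µχ)(iH† ↔D_µ H)/Λ² yields an effective Zχχ vector coupling g_χ ≈ g_*² m_Z v/Λ², and Γ(Z → χχ) = g_χ² m_Z/(12π) for m_DM ≪ m_Z. Requiring this to stay below the room left in the LEP invisible width reproduces the Λ ∼ 10 TeV (g_* = 4π) and Λ ∼ 1 TeV (g_* = 1) scales quoted above. The g_*² normalization of the coefficient and the 2 MeV width budget are our assumptions in this sketch:

```python
import math

M_Z = 91.19        # GeV, Z boson mass
V_EW = 246.0       # GeV, Higgs vev
DGAMMA_INV = 2e-3  # GeV, assumed room in the Z invisible width from LEP

def lambda_bound_higgs_portal(g_star):
    """Lower bound on Lambda from Gamma(Z -> chi chi) < DGAMMA_INV.

    Effective coupling g_chi = g*^2 * m_Z * v / Lambda^2 (assumed g*^2
    normalization of the dimension-6 coefficient); for m_DM << m_Z the
    partial width is Gamma = g_chi^2 * m_Z / (12 pi).
    """
    g_chi_max = math.sqrt(12 * math.pi * DGAMMA_INV / M_Z)
    return math.sqrt(g_star**2 * M_Z * V_EW / g_chi_max)

print(lambda_bound_higgs_portal(4 * math.pi))  # ~11 TeV, cf. Lambda <~ 10 TeV in the text
print(lambda_bound_higgs_portal(1.0))          # ~0.9 TeV, cf. Lambda <~ 1 TeV in the text
```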
Conclusions

As more and more parameter space is ruled out by experiments without any clear signal of a dark matter discovery, it is timely to explore new avenues and regions of parameter space traditionally neglected. In this paper we have analyzed the case of dark matter with mass in the MeV range, i.e. below the reach of current direct detection experiments. This region is particularly interesting since it can be probed by future direct detection experiments based on dark matter scattering off electrons. Using a model independent approach, we have added to the Standard Model lagrangian all the dimension 6 effective operators that can involve dark matter, and we have properly taken into account the mixing between operators induced by the renormalization group running. Our main results are summarized in Figures 1 and 2. As can be seen, large portions of parameter space are already probed in a model independent way. Although the exact value of the maximum scale Λ already excluded depends strongly on the structure and size of the UV couplings, it is clear from the plots that, under our assumptions, most of the parameter space to which future electron scattering and CMB experiments are sensitive is already ruled out. We stress that, since most of the bounds involve scales below the top mass, Λ should in this case be interpreted as the mass of some mediator generating the relevant operators. Our bounds also apply to this case in the limit in which effects involving resonant production of the mediator are neglected.

Acknowledgments

We would like to thank S. Bruggisser, S. Fichet, F. Iocco, B. Kavanagh, M. Taoso and A. Urbano for valuable discussions and suggestions. This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

A Useful Formulas

We present in this Appendix some useful formulas. To set the notation, we write a generic coupling between the dark matter and the Standard Model fermions as

$$\mathcal{L} \supset \frac{c^f_{XY}}{\Lambda^2}\,(\bar\chi\gamma^\mu\Gamma_X\chi)(\bar f\gamma_\mu\Gamma_Y f), \qquad X,Y \in \{V,A\}, \quad \Gamma_V = 1, \quad \Gamma_A = \gamma^5 .$$

Following Ref. [51], the lowest order chiral perturbation theory lagrangian coupling DM to mesons is given by

$$\mathcal{L}_\chi = \frac{f^2}{4}\,\mathrm{Tr}\big[D_\mu U (D^\mu U)^\dagger\big], \qquad D_\mu U = \partial_\mu U - i(v_\mu + a_\mu)U + iU(v_\mu - a_\mu),$$

where $U = \exp(i\sqrt{2}\,\Pi/f)$ and as usual the mesons hermitian matrix reads

$$\Pi = \begin{pmatrix} \frac{\pi^0}{\sqrt{2}} + \frac{\eta}{\sqrt{6}} & \pi^+ & K^+ \\ \pi^- & -\frac{\pi^0}{\sqrt{2}} + \frac{\eta}{\sqrt{6}} & K^0 \\ K^- & \bar K^0 & -\frac{2\eta}{\sqrt{6}} \end{pmatrix},$$

while the vector spurions including the DM currents are defined as

$$v^\chi_\mu = \frac{\mathrm{diag}(c^u_{VV}, c^d_{VV}, c^s_{VV})}{\Lambda^2}\,\bar\chi\gamma_\mu\chi + \frac{\mathrm{diag}(c^u_{AV}, c^d_{AV}, c^s_{AV})}{\Lambda^2}\,\bar\chi\gamma_\mu\gamma^5\chi,$$
$$a^\chi_\mu = \frac{\mathrm{diag}(c^u_{VA}, c^d_{VA}, c^s_{VA})}{\Lambda^2}\,\bar\chi\gamma_\mu\chi + \frac{\mathrm{diag}(c^u_{AA}, c^d_{AA}, c^s_{AA})}{\Lambda^2}\,\bar\chi\gamma_\mu\gamma^5\chi. \tag{A.4}$$

Using the previous definitions, the annihilation cross section for χχ → MM into mesons M, for a vector DM current, follows from the lagrangian above; it is controlled by the meson mass m_M and by the combinations of Wilson coefficients entering Eq. (A.4).
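As a small bookkeeping aid, the flavor-diagonal coefficient matrices of Eq. (A.4) can be assembled numerically as follows; the values of the Wilson coefficients and of Λ are placeholders, not numbers from the paper:

```python
import numpy as np

LAMBDA = 1000.0  # GeV, EFT cut-off (placeholder value)

def spurion_matrices(c_vv, c_av, c_va, c_aa):
    """Build the flavor-diagonal coefficient matrices of Eq. (A.4).

    Each argument is a (u, d, s) triplet. The first pair multiplies the DM
    vector and axial currents in v^mu_chi; the second pair does the same
    for a^mu_chi.
    """
    v_from_vector = np.diag(c_vv) / LAMBDA**2  # multiplies chi-bar gamma^mu chi in v^mu_chi
    v_from_axial = np.diag(c_av) / LAMBDA**2   # multiplies chi-bar gamma^mu gamma5 chi in v^mu_chi
    a_from_vector = np.diag(c_va) / LAMBDA**2  # multiplies chi-bar gamma^mu chi in a^mu_chi
    a_from_axial = np.diag(c_aa) / LAMBDA**2   # multiplies chi-bar gamma^mu gamma5 chi in a^mu_chi
    return v_from_vector, v_from_axial, a_from_vector, a_from_axial

# flavor-universal example: c_VV = 1 for u, d, s; all other structures off
v_v, v_a, a_v, a_a = spurion_matrices((1, 1, 1), (0, 0, 0), (0, 0, 0), (0, 0, 0))
print(v_v)
```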